Dec 13 03:44:27 crc systemd[1]: Starting Kubernetes Kubelet... Dec 13 03:44:27 crc restorecon[4697]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 13 03:44:27 
crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 
03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc 
restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 
crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 13 03:44:27 
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:27 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:27 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 
03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 03:44:28 crc 
restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 
03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 
03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:28 crc restorecon[4697]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 03:44:28 crc restorecon[4697]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 13 03:44:29 crc kubenswrapper[4766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 03:44:29 crc kubenswrapper[4766]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 13 03:44:29 crc kubenswrapper[4766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 03:44:29 crc kubenswrapper[4766]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
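[Editor's aside] Every restorecon entry above has the same shape (path, "not reset as customized by admin to", target SELinux context), and the kubelet lines before and after this point flag deprecated command-line options, so the excerpt can be summarized mechanically. A minimal sketch follows, assuming the journal text has been saved verbatim to a local file named kubelet-start.log (hypothetical filename) and focusing on the per-pod paths; the patterns simply mirror the two message shapes visible in the log.

import re
from collections import Counter, defaultdict

# Summarize the journal excerpt. The patterns mirror the two message
# shapes visible above: restorecon "not reset" entries for pod paths,
# and kubelet "Flag --x has been deprecated" warnings.
RESTORECON = re.compile(
    r"restorecon\[\d+\]: (?P<path>/var/lib/kubelet/pods/(?P<pod>[^/]+)/\S+)"
    r" not reset as customized by admin to "
    r"(?P<ctx>system_u:object_r:\w+:s0(?::c\d+,c\d+)?)"
)
DEPRECATED = re.compile(r"Flag (--[\w-]+) has been deprecated")

per_pod = Counter()
labels = defaultdict(set)
deprecated_flags = set()

with open("kubelet-start.log") as log:  # hypothetical local copy
    for line in log:
        # Entries can run together on one physical line in this dump,
        # so scan with finditer rather than matching line starts.
        for m in RESTORECON.finditer(line):
            per_pod[m.group("pod")] += 1
            labels[m.group("pod")].add(m.group("ctx"))
        deprecated_flags.update(DEPRECATED.findall(line))

for pod, n in per_pod.most_common():
    print(f"{pod}: {n} paths left as {sorted(labels[pod])}")
print("deprecated kubelet flags seen:", ", ".join(sorted(deprecated_flags)))

Run against this excerpt, it would report, for example, that pod 5225d0e4-402f-4861-b410-819f433b1803 keeps a single MCS category pair (s0:c7,c13) across all of its catalog-content files, consistent with each pod's files sharing one container label, and that six kubelet flags (--container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir, --register-with-taints, --pod-infra-container-image, --system-reserved) are reported as deprecated. The remaining kubelet warnings continue below.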
Dec 13 03:44:29 crc kubenswrapper[4766]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 03:44:29 crc kubenswrapper[4766]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.265411 4766 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279500 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279567 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279577 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279582 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279590 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279602 4766 feature_gate.go:330] unrecognized feature gate: Example Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279614 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279623 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279629 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279634 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279639 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279645 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279653 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279665 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279672 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279679 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279687 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279693 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279699 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279704 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279709 4766 
feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279714 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279718 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279723 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279729 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279734 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279739 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279745 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279781 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279787 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279792 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279797 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279831 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279837 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279842 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279847 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279852 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279857 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279862 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279869 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279886 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279890 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279894 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279901 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279906 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279910 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 03:44:29 
crc kubenswrapper[4766]: W1213 03:44:29.279914 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279919 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279925 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279931 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279935 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279940 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279944 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279948 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279953 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279957 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279962 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279967 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279971 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279975 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279979 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279983 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279986 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279990 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279994 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.279999 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.280003 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.280007 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.280011 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.280015 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.280019 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 03:44:29 crc 
kubenswrapper[4766]: I1213 03:44:29.280150 4766 flags.go:64] FLAG: --address="0.0.0.0" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280168 4766 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280179 4766 flags.go:64] FLAG: --anonymous-auth="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280186 4766 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280194 4766 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280201 4766 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280209 4766 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280224 4766 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280231 4766 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280237 4766 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280249 4766 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280259 4766 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280265 4766 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280271 4766 flags.go:64] FLAG: --cgroup-root="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280276 4766 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280282 4766 flags.go:64] FLAG: --client-ca-file="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280288 4766 flags.go:64] FLAG: --cloud-config="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280293 4766 flags.go:64] FLAG: --cloud-provider="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280297 4766 flags.go:64] FLAG: --cluster-dns="[]" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280304 4766 flags.go:64] FLAG: --cluster-domain="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280309 4766 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280314 4766 flags.go:64] FLAG: --config-dir="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280318 4766 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280323 4766 flags.go:64] FLAG: --container-log-max-files="5" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280330 4766 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280334 4766 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280339 4766 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280344 4766 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280348 4766 flags.go:64] FLAG: --contention-profiling="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280353 4766 
flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280357 4766 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280362 4766 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280367 4766 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280374 4766 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280379 4766 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280384 4766 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280389 4766 flags.go:64] FLAG: --enable-load-reader="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280396 4766 flags.go:64] FLAG: --enable-server="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280402 4766 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280411 4766 flags.go:64] FLAG: --event-burst="100" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280417 4766 flags.go:64] FLAG: --event-qps="50" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280422 4766 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280431 4766 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280457 4766 flags.go:64] FLAG: --eviction-hard="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280465 4766 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280471 4766 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280476 4766 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280482 4766 flags.go:64] FLAG: --eviction-soft="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280488 4766 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280493 4766 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280499 4766 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280504 4766 flags.go:64] FLAG: --experimental-mounter-path="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280509 4766 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280514 4766 flags.go:64] FLAG: --fail-swap-on="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280519 4766 flags.go:64] FLAG: --feature-gates="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280526 4766 flags.go:64] FLAG: --file-check-frequency="20s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280531 4766 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280538 4766 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280545 4766 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280551 4766 flags.go:64] 
FLAG: --healthz-port="10248" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280557 4766 flags.go:64] FLAG: --help="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280564 4766 flags.go:64] FLAG: --hostname-override="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280569 4766 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280576 4766 flags.go:64] FLAG: --http-check-frequency="20s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280594 4766 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280601 4766 flags.go:64] FLAG: --image-credential-provider-config="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280611 4766 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280617 4766 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280622 4766 flags.go:64] FLAG: --image-service-endpoint="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280636 4766 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280641 4766 flags.go:64] FLAG: --kube-api-burst="100" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280647 4766 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280653 4766 flags.go:64] FLAG: --kube-api-qps="50" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280658 4766 flags.go:64] FLAG: --kube-reserved="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280663 4766 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280668 4766 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280674 4766 flags.go:64] FLAG: --kubelet-cgroups="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280679 4766 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280685 4766 flags.go:64] FLAG: --lock-file="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280690 4766 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280696 4766 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280702 4766 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280712 4766 flags.go:64] FLAG: --log-json-split-stream="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280719 4766 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280724 4766 flags.go:64] FLAG: --log-text-split-stream="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280729 4766 flags.go:64] FLAG: --logging-format="text" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280733 4766 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280738 4766 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280743 4766 flags.go:64] FLAG: --manifest-url="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280747 4766 flags.go:64] FLAG: 
--manifest-url-header="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280754 4766 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280759 4766 flags.go:64] FLAG: --max-open-files="1000000" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280765 4766 flags.go:64] FLAG: --max-pods="110" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280769 4766 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280773 4766 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280778 4766 flags.go:64] FLAG: --memory-manager-policy="None" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280786 4766 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280792 4766 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280796 4766 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280802 4766 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280823 4766 flags.go:64] FLAG: --node-status-max-images="50" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280831 4766 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280837 4766 flags.go:64] FLAG: --oom-score-adj="-999" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280842 4766 flags.go:64] FLAG: --pod-cidr="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280847 4766 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280858 4766 flags.go:64] FLAG: --pod-manifest-path="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280863 4766 flags.go:64] FLAG: --pod-max-pids="-1" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280869 4766 flags.go:64] FLAG: --pods-per-core="0" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280874 4766 flags.go:64] FLAG: --port="10250" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280879 4766 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280884 4766 flags.go:64] FLAG: --provider-id="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280889 4766 flags.go:64] FLAG: --qos-reserved="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280895 4766 flags.go:64] FLAG: --read-only-port="10255" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280900 4766 flags.go:64] FLAG: --register-node="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280906 4766 flags.go:64] FLAG: --register-schedulable="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280910 4766 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280933 4766 flags.go:64] FLAG: --registry-burst="10" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280939 4766 flags.go:64] FLAG: --registry-qps="5" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280944 4766 flags.go:64] FLAG: 
--reserved-cpus="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280951 4766 flags.go:64] FLAG: --reserved-memory="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280959 4766 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280965 4766 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280970 4766 flags.go:64] FLAG: --rotate-certificates="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280975 4766 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280980 4766 flags.go:64] FLAG: --runonce="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280984 4766 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280988 4766 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280993 4766 flags.go:64] FLAG: --seccomp-default="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.280999 4766 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281005 4766 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281011 4766 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281016 4766 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281022 4766 flags.go:64] FLAG: --storage-driver-password="root" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281027 4766 flags.go:64] FLAG: --storage-driver-secure="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281035 4766 flags.go:64] FLAG: --storage-driver-table="stats" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281041 4766 flags.go:64] FLAG: --storage-driver-user="root" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281047 4766 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281053 4766 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281058 4766 flags.go:64] FLAG: --system-cgroups="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281063 4766 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281072 4766 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281078 4766 flags.go:64] FLAG: --tls-cert-file="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281083 4766 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281092 4766 flags.go:64] FLAG: --tls-min-version="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281098 4766 flags.go:64] FLAG: --tls-private-key-file="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281104 4766 flags.go:64] FLAG: --topology-manager-policy="none" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281110 4766 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281116 4766 flags.go:64] FLAG: --topology-manager-scope="container" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281123 4766 flags.go:64] FLAG: 
--v="2" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281132 4766 flags.go:64] FLAG: --version="false" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281143 4766 flags.go:64] FLAG: --vmodule="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281151 4766 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281156 4766 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281292 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281303 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281311 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281316 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281320 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281324 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281328 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281333 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281337 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281341 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281344 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281348 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281352 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281358 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281362 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281366 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281370 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281374 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281378 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281383 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281387 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281391 4766 feature_gate.go:330] unrecognized feature 
gate: AdditionalRoutingCapabilities Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281395 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281399 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281403 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281407 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281411 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281415 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281419 4766 feature_gate.go:330] unrecognized feature gate: Example Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281423 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281431 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281457 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281462 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281467 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281471 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281474 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281478 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281483 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281489 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281493 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281497 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281501 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281505 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281509 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281513 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281517 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281520 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281524 4766 
feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281537 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281542 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281546 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281549 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281553 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281557 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281560 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281564 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281568 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281571 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281575 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281579 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281582 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281586 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281591 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281595 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281599 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281604 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281608 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281611 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281616 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281620 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.281625 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
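Each feature-gate parse pass above re-emits the same wall of feature_gate.go:330 warnings; the names are cluster-level (OpenShift-style) gates that this kubelet binary evidently does not register. A small, hypothetical filter to deduplicate and count them from the journal (assumes the systemd unit is named kubelet and that journalctl emits one record per line):

#!/usr/bin/env python3
"""Hypothetical helper: count distinct 'unrecognized feature gate'
warnings from kubelet journal output piped on stdin, e.g.:

    journalctl -u kubelet --boot | python3 count_gates.py
"""
import re
import sys
from collections import Counter

PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

counts = Counter(m.group(1)
                 for line in sys.stdin
                 for m in PATTERN.finditer(line))

for gate, n in counts.most_common():
    print(f"{n:3d}  {gate}")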
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.281840 4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.290964 4766 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.291013 4766 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291111 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291126 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291131 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291135 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291143 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291149 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291155 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291159 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291164 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291168 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291172 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291176 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291180 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291185 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291190 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291199 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291204 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291208 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291213 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291219 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291224 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291228 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291233 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291238 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291242 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291246 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291250 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291255 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291260 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291264 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291269 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291273 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291278 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291283 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291290 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291294 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291298 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291302 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291307 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291311 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291315 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291319 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291324 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291327 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291331 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291335 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291339 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291343 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291346 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291350 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291354 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291358 4766 feature_gate.go:330] unrecognized feature gate: Example Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291362 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291366 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291370 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291376 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
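The feature_gate.go:386 "feature gates: {map[...]}" records above and below give the effective gate set as a Go map dump. A hypothetical parser that turns one such record into a Python dict; the sample string here is abbreviated from the full record logged in this boot:

#!/usr/bin/env python3
"""Hypothetical parser for the 'feature gates: {map[...]}' records
(feature_gate.go:386) in this log: turns the Go map dump into a dict."""
import re

# Abbreviated from the record logged above.
LINE = ("feature gates: {map[CloudDualStackNodeIPs:true "
        "DisableKubeletCloudCredentialProviders:true KMSv1:true "
        "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

def parse_gates(record: str) -> dict[str, bool]:
    body = re.search(r"\{map\[(.*)\]\}", record).group(1)
    return {name: value == "true"
            for name, value in (pair.split(":") for pair in body.split())}

print(parse_gates(LINE))
# {'CloudDualStackNodeIPs': True, ..., 'VolumeAttributesClass': False}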
Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291381 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291386 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291391 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291396 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291400 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291404 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291409 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291414 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291419 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291426 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291446 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291452 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291456 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291461 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291470 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.291480 4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291605 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291616 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291620 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291624 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291629 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291633 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291636 4766 feature_gate.go:330] unrecognized feature gate: 
ClusterMonitoringConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291640 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291644 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291649 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291655 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291659 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291663 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291667 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291671 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291675 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291679 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291682 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291686 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291690 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291693 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291697 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291701 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291705 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291708 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291712 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291716 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291719 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291723 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291727 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291731 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291735 4766 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291739 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291743 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291747 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291751 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291755 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291760 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291764 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291769 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291774 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291778 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291782 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291786 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291789 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291793 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291797 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291801 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291804 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291808 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291811 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291816 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291821 4766 feature_gate.go:330] unrecognized feature gate: Example Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291825 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291829 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291834 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291839 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291875 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291884 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291889 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291894 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291898 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291903 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291910 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291914 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291920 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
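Three warning shapes repeat through these passes, distinguishable by the feature_gate.go source line in each record: :330 (unrecognized gate), :351 (deprecated gate still set, KMSv1 here), and :353 (GA gate explicitly set). A hypothetical triage that buckets journal lines accordingly (again assuming one record per line on stdin):

#!/usr/bin/env python3
"""Hypothetical triage of the three feature-gate warning shapes seen in
this log, keyed on the feature_gate.go source line in each record."""
import re
import sys
from collections import defaultdict

# Observed in this boot: :330 unrecognized, :351 deprecated-but-set,
# :353 GA-but-explicitly-set.
KIND = {"330": "unrecognized", "351": "deprecated", "353": "ga-locked"}

buckets = defaultdict(set)
for line in sys.stdin:
    m = re.search(r"feature_gate\.go:(\d+)\].*?gate:?\s+(\w+)", line)
    if m and m.group(1) in KIND:
        buckets[KIND[m.group(1)]].add(m.group(2))

for kind in sorted(buckets):
    print(f"{kind}: {len(buckets[kind])} gates")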
Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291926 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291931 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291935 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291939 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.291944 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.291951 4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.292174 4766 server.go:940] "Client rotation is on, will bootstrap in background" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.295079 4766 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.295184 4766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.295685 4766 server.go:997] "Starting client certificate rotation" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.295717 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.296080 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-14 09:07:40.690429914 +0000 UTC Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.296189 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 29h23m11.394246947s for next certificate rotation Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.308689 4766 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.311411 4766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.322645 4766 log.go:25] "Validated CRI v1 runtime API" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.338916 4766 log.go:25] "Validated CRI v1 image API" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.340696 4766 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.345199 4766 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-13-03-40-05-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.345240 4766 fs.go:134] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.368808 4766 manager.go:217] Machine: {Timestamp:2025-12-13 03:44:29.367304895 +0000 UTC m=+0.877237879 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:2794a81b-3be3-453b-be1b-91ab43e5fda5 BootID:50a96b01-4309-433a-9f85-4245993e96cc Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:72:f0:01 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:72:f0:01 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d8:b7:84 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:16:94:6f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1e:05:80 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:18:f9:29 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e6:f2:57:68:0a:b1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ba:29:d6:8b:d1:cb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.369448 4766 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
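The manager.go:217 Machine record above is cAdvisor's one-shot hardware inventory: 12 vCPUs exposed as 12 single-core sockets (NumCores:12, NumPhysicalCores:1, NumSockets:12, typical for a VM), roughly 31 GiB of memory, one 200 GiB vda disk, and per-filesystem capacities. The raw byte counts are easier to sanity-check with a quick conversion; a self-contained Go sketch, with the values copied from the record and a "gib" helper of my own, not kubelet code:

package main

import "fmt"

// Byte counts copied verbatim from the manager.go:217 Machine record above.
const (
	memoryCapacity = 33654128640  // MemoryCapacity
	vdaSize        = 214748364800 // DiskMap vda Size
	varCapacity    = 85292941312  // /dev/vda4 (/var) filesystem capacity
)

// gib converts bytes to binary gigabytes (hypothetical helper).
func gib(b uint64) float64 { return float64(b) / (1 << 30) }

func main() {
	fmt.Printf("memory:   %.2f GiB\n", gib(memoryCapacity)) // ~31.34 GiB
	fmt.Printf("disk vda: %.0f GiB\n", gib(vdaSize))        // 200 GiB exactly
	fmt.Printf("/var:     %.2f GiB\n", gib(varCapacity))    // ~79.4 GiB
}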
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.369732 4766 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.370256 4766 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.370556 4766 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.370658 4766 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.370962 4766 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.371023 4766 container_manager_linux.go:303] "Creating device plugin manager"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.371214 4766 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.371563 4766 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.371924 4766 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.372105 4766 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.372953 4766 kubelet.go:418] "Attempting to sync node with API server"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.373046 4766 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
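The nodeConfig dump above is the node's effective resource policy: SystemReserved holds back 200m of CPU and 350Mi each of memory and ephemeral storage, PodPidsLimit caps each pod at 4096 PIDs, and HardEvictionThresholds define when the kubelet starts evicting (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A minimal sketch of how a LessThan threshold of either form (absolute quantity or percentage of capacity) is evaluated, assuming a simple available/capacity model; the type and method names are mine, not kubelet internals:

package main

import "fmt"

// threshold models one HardEvictionThresholds entry from the nodeConfig above:
// either an absolute quantity in bytes or a percentage of capacity.
type threshold struct {
	signal   string
	quantity uint64  // e.g. 100Mi for memory.available; 0 if percentage-based
	pct      float64 // e.g. 0.1 for nodefs.available; 0 if quantity-based
}

// met reports whether the signal's available amount has fallen below the
// threshold, mirroring the Operator:"LessThan" semantics in the log.
func (t threshold) met(available, capacity uint64) bool {
	limit := t.quantity
	if t.pct > 0 {
		limit = uint64(t.pct * float64(capacity))
	}
	return available < limit
}

func main() {
	nodefs := threshold{signal: "nodefs.available", pct: 0.1}
	// /dev/vda4 (/var) capacity from the Machine record: 85292941312 bytes.
	fmt.Println(nodefs.met(6<<30, 85292941312)) // true: 6 GiB is under the ~7.9 GiB limit

	mem := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	fmt.Println(mem.met(512<<20, 33654128640)) // false: 512Mi still available
}

On this node the 10% nodefs rule works out to roughly 8 GiB of the 85292941312-byte /var filesystem.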
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.373160 4766 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.373236 4766 kubelet.go:324] "Adding apiserver pod source"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.373304 4766 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.375449 4766 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.375988 4766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.376299 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.376582 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError"
Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.376671 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.376760 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.376851 4766 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377494 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377520 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377531 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377549 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377561 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377589 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377600 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377615 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377625 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377636 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377658 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377667 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.377840 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.378325 4766 server.go:1280] "Started kubelet"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.378768 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.378963 4766 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.378965 4766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.379457 4766 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 03:44:29 crc systemd[1]: Started Kubernetes Kubelet.
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.380889 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.380971 4766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.381065 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:30:06.041716985 +0000 UTC
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.381110 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 164h45m36.660609744s for next certificate rotation
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.381160 4766 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.381168 4766 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.381233 4766 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.381590 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.385249 4766 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.388337 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="200ms"
Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.388348 4766 reflector.go:561]
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.388418 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.388731 4766 factory.go:55] Registering systemd factory Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.388924 4766 factory.go:221] Registration of the systemd container factory successfully Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.388521 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1880a98b3a9f3d87 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 03:44:29.378297223 +0000 UTC m=+0.888230197,LastTimestamp:2025-12-13 03:44:29.378297223 +0000 UTC m=+0.888230197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.389955 4766 factory.go:153] Registering CRI-O factory Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.389975 4766 factory.go:221] Registration of the crio container factory successfully Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.390048 4766 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.390087 4766 factory.go:103] Registering Raw factory Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.390109 4766 manager.go:1196] Started watching for new ooms in manager Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.396912 4766 manager.go:319] Starting recovery of all containers Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399539 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399614 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399625 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399636 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399646 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399656 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399679 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399689 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399700 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399708 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399718 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399726 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399739 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399754 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399813 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399852 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399862 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399871 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399881 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399893 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399904 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399915 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399926 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399957 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399971 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.399980 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400011 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400022 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400031 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400042 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400053 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400062 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400072 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400082 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400092 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400119 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400128 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400137 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400147 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400157 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400166 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400176 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400186 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400196 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400205 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400220 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400235 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400247 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400256 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400266 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400278 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400290 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400312 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400325 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400335 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400346 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400392 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400402 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.400412 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401282 4766 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401312 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401324 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401335 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401346 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401358 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401369 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401380 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401392 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401408 4766 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401421 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401452 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401463 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401474 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401487 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401496 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401534 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401546 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401559 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401573 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401582 4766 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401600 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401612 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401625 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401642 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401657 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401669 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401679 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401690 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401735 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401752 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401762 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401794 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401806 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401815 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401827 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401836 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401848 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401860 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401870 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401888 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401900 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401913 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401929 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401944 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401956 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401972 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401985 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.401996 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402009 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402019 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402032 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402045 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402055 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402065 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402076 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402088 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402098 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402108 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402119 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402130 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402140 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402151 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402161 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402172 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402183 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402195 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402212 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402224 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402235 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402245 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402256 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402266 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402277 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402288 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402298 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402318 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402329 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402339 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402351 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402362 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402372 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402384 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402395 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402407 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402425 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402514 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402546 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402561 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402578 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402591 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402604 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402616 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402663 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402705 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402720 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402730 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402740 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402751 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402764 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402776 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402799 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402869 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402885 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402900 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402909 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402920 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402930 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402939 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402950 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402958 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402967 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402976 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.402991 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403003 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403013 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403022 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403033 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403048 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403060 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403075 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403094 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403116 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403132 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403143 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403164 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403204 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403223 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403264 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403284 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403322 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403334 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403351 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403365 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403402 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403414 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403444 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403463 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403476 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403497 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403506 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403516 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403524 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403536 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403624 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403637 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403649 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403660 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403675 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403686 4766 reconstruct.go:97] "Volume reconstruction finished" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.403694 4766 reconciler.go:26] "Reconciler: start to sync state" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.415685 4766 manager.go:324] Recovery completed Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.462737 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.464872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.464921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.464931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.465660 4766 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 13 03:44:29 crc 
kubenswrapper[4766]: I1213 03:44:29.465677 4766 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.465701 4766 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.481744 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.581958 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.589950 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="400ms" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.603426 4766 policy_none.go:49] "None policy: Start" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.606808 4766 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.606887 4766 state_mem.go:35] "Initializing new in-memory state store" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.612882 4766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.614851 4766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.614919 4766 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.614964 4766 kubelet.go:2335] "Starting kubelet main sync loop" Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.615047 4766 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 03:44:29 crc kubenswrapper[4766]: W1213 03:44:29.624225 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.624355 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.693614 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.715269 4766 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.727086 4766 manager.go:334] "Starting Device Plugin manager" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.727156 4766 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.727170 4766 server.go:79] "Starting device plugin registration server" 
Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.727727 4766 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.727758 4766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.728591 4766 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.728701 4766 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.728711 4766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.750927 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.827976 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.829130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.829166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.829177 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.829204 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.830088 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.916228 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.916397 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.917823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.917880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.917892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.918125 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.918473 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.918515 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.919296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.919345 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.919355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.919561 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.919568 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.919585 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.919593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.919929 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.919958 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.920455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.920540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.920556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.920665 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.921220 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.921276 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.922620 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.922650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.922663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.922916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.922939 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.922950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.923090 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.923455 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.923477 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.923766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.923779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.923787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.924995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.925016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.925027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.925335 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.925369 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.925731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.925744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.925751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.926354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.926372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:29 crc kubenswrapper[4766]: I1213 03:44:29.926382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:29 crc kubenswrapper[4766]: E1213 03:44:29.990815 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="800ms" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011177 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011207 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011229 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011281 4766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011296 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011310 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011375 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011392 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011581 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011671 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011698 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011737 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.011805 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.030690 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.032023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.032054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.032064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.032105 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 03:44:30 crc kubenswrapper[4766]: E1213 03:44:30.032572 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113372 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113457 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113517 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113540 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113557 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113591 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113607 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113663 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113692 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113708 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113722 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113753 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113784 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113801 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.113825 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114044 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114080 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114112 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114156 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114208 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114256 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114301 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114325 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114355 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114378 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114412 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114455 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114494 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.114532 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: W1213 03:44:30.228086 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:30 crc kubenswrapper[4766]: E1213 03:44:30.228184 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.243787 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.251604 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.268010 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.281986 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.288565 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 13 03:44:30 crc kubenswrapper[4766]: W1213 03:44:30.300187 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-28925ca0e0f4f4516a3874ac3e4f72faba5fae63961e94c0c247b89829854b13 WatchSource:0}: Error finding container 28925ca0e0f4f4516a3874ac3e4f72faba5fae63961e94c0c247b89829854b13: Status 404 returned error can't find the container with id 28925ca0e0f4f4516a3874ac3e4f72faba5fae63961e94c0c247b89829854b13 Dec 13 03:44:30 crc kubenswrapper[4766]: W1213 03:44:30.306546 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ca69d0e9dcf1297dbb67914f36e5523e4feada0d6f666e381b6fe6c9d46abb24 WatchSource:0}: Error finding container ca69d0e9dcf1297dbb67914f36e5523e4feada0d6f666e381b6fe6c9d46abb24: Status 404 returned error can't find the container with id ca69d0e9dcf1297dbb67914f36e5523e4feada0d6f666e381b6fe6c9d46abb24 Dec 13 03:44:30 crc kubenswrapper[4766]: W1213 03:44:30.307422 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-32da503e55e5bcf15ddb36018e53af66e7d5282e183d57895f7fea4c9144ebf4 WatchSource:0}: Error finding container 32da503e55e5bcf15ddb36018e53af66e7d5282e183d57895f7fea4c9144ebf4: Status 404 returned error can't find the container with id 32da503e55e5bcf15ddb36018e53af66e7d5282e183d57895f7fea4c9144ebf4 Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.380730 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.433495 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.435290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.435360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.435375 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.435405 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 03:44:30 crc kubenswrapper[4766]: E1213 03:44:30.436053 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Dec 13 03:44:30 crc kubenswrapper[4766]: W1213 03:44:30.573086 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:30 crc kubenswrapper[4766]: E1213 03:44:30.573213 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:44:30 crc kubenswrapper[4766]: W1213 03:44:30.650507 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:30 crc kubenswrapper[4766]: E1213 03:44:30.650636 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.697869 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e6aca2351eafe556ce7a87a1829d0bb5ab9cdeec958ac252b35f43f0204083e1"} Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.699392 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bd10ae2c432c421e92de4dd4967c815c5f66d58ce1649d97d103eb6642091d7b"} Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.700266 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca69d0e9dcf1297dbb67914f36e5523e4feada0d6f666e381b6fe6c9d46abb24"} Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.701308 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"28925ca0e0f4f4516a3874ac3e4f72faba5fae63961e94c0c247b89829854b13"} Dec 13 03:44:30 crc kubenswrapper[4766]: I1213 03:44:30.702158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"32da503e55e5bcf15ddb36018e53af66e7d5282e183d57895f7fea4c9144ebf4"} Dec 13 03:44:30 crc kubenswrapper[4766]: E1213 03:44:30.791470 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="1.6s" Dec 13 03:44:31 crc kubenswrapper[4766]: W1213 03:44:31.189787 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:31 crc kubenswrapper[4766]: E1213 03:44:31.190247 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.237312 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.240125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.240200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.240214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.240394 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 03:44:31 crc kubenswrapper[4766]: E1213 03:44:31.241497 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.380138 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.719626 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="925bdd1562b0313774192c0c4918ef30b06fd647670dd02594335c47f0f6117a" exitCode=0 Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.719723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"925bdd1562b0313774192c0c4918ef30b06fd647670dd02594335c47f0f6117a"} Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.719781 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.722012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.722040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.722051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.724547 4766 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="695be1c53bcda062deadefe90c3b3c333ce4631f2b0ec3234d15d844337263a3" exitCode=0 Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.724609 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.724669 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"695be1c53bcda062deadefe90c3b3c333ce4631f2b0ec3234d15d844337263a3"} Dec 13 
03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.726404 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.726456 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.726474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.729178 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7"} Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.729231 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522"} Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.729246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f"} Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.731998 4766 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d" exitCode=0 Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.732100 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d"} Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.732195 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.733266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.733326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.733342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.734103 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1" exitCode=0 Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.734156 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1"} Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.734224 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 
03:44:31.735062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.735099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.735111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.736589 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.739203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.739264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:31 crc kubenswrapper[4766]: I1213 03:44:31.739281 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:32 crc kubenswrapper[4766]: W1213 03:44:32.238411 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:32 crc kubenswrapper[4766]: E1213 03:44:32.238604 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:44:32 crc kubenswrapper[4766]: W1213 03:44:32.370502 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:32 crc kubenswrapper[4766]: E1213 03:44:32.370672 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.380349 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:32 crc kubenswrapper[4766]: E1213 03:44:32.392841 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="3.2s" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.851930 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.863676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.863746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.863765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.863814 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 03:44:32 crc kubenswrapper[4766]: E1213 03:44:32.864616 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.212:6443: connect: connection refused" node="crc" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.867787 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.868318 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4af5ec01dcfebc7f2efef3b21b8a15139f420a6f353fa1931e9a6301af5b5779"} Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.868833 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.868857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.868872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.875375 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b"} Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.875547 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.878347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.878464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.878481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.881560 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb"} Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.881625 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a"} Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.900771 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339"} Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.900865 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c"} Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.904495 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0bbcd91b71b86b09b91f066e8a83d34239b7486f0f166275d92c6ba2305842da" exitCode=0 Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.904589 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0bbcd91b71b86b09b91f066e8a83d34239b7486f0f166275d92c6ba2305842da"} Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.904653 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.906009 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.906091 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:32 crc kubenswrapper[4766]: I1213 03:44:32.906109 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:32 crc kubenswrapper[4766]: W1213 03:44:32.969591 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:32 crc kubenswrapper[4766]: E1213 03:44:32.969719 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:44:33 crc kubenswrapper[4766]: E1213 03:44:33.046484 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1880a98b3a9f3d87 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 03:44:29.378297223 +0000 UTC m=+0.888230197,LastTimestamp:2025-12-13 03:44:29.378297223 +0000 UTC m=+0.888230197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.380198 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:33 crc kubenswrapper[4766]: W1213 03:44:33.636644 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:33 crc kubenswrapper[4766]: E1213 03:44:33.636755 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.212:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.942881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"efdbcdf88d663ac4625bbb1cb0a9f79df3687b878f6873e2767817649a5c0d07"} Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.942824 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="efdbcdf88d663ac4625bbb1cb0a9f79df3687b878f6873e2767817649a5c0d07" exitCode=0 Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.942960 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.944216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.944256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.944275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.948661 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253"} Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.948703 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.949488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.949519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.949529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.952039 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d"} Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.952082 4766 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.952092 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.953110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.953142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.953153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.953180 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.953202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:33 crc kubenswrapper[4766]: I1213 03:44:33.953213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.399242 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.957199 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d6a465322e23e44192735a0f63dc7a766842efd61fd71121026013424e12a2f9"} Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.964589 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.964656 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.964564 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"88091f2417623f21f49464b7d45d40f6b74d9cd736632e5570b0614a155e8aee"} Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.964754 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0"} Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.964700 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.965947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.965982 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.965990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.966011 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.966022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:34 crc kubenswrapper[4766]: I1213 03:44:34.965995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:35 crc kubenswrapper[4766]: I1213 03:44:35.379596 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:44:35 crc kubenswrapper[4766]: I1213 03:44:35.971369 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 03:44:35 crc kubenswrapper[4766]: I1213 03:44:35.971471 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:35 crc kubenswrapper[4766]: I1213 03:44:35.971413 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7bac7ca440be262c41e4bcd74a2237a5964972f5f506cbbbbcace7e141ed2b8b"} Dec 13 03:44:35 crc kubenswrapper[4766]: I1213 03:44:35.971529 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9dc46bb43501ab885f584606572c818f630cd70a9de01d0e7e3193c2a1c98485"} Dec 13 03:44:35 crc kubenswrapper[4766]: I1213 03:44:35.971550 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"00acb07cb7584ae9bae67cbf15179c2174c6802b779904d0c88da61b5e6aa9a4"} Dec 13 03:44:35 crc kubenswrapper[4766]: I1213 03:44:35.972781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:35 crc kubenswrapper[4766]: I1213 03:44:35.972821 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:35 crc kubenswrapper[4766]: I1213 03:44:35.972833 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.065288 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.066904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.066943 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.066953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.066977 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.256205 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.577697 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.578101 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.580166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.580234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.580254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.808683 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:36 crc kubenswrapper[4766]: I1213 03:44:36.822478 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.183879 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.183853 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"83ef4e727c5935b9944a4e90b2d4133a089d2a5114a45668bda678c52ee3ff87"} Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.183960 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.185696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.185712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.185742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.185759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.185778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.185791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.509882 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.510374 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.511999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.512054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:37 crc 
kubenswrapper[4766]: I1213 03:44:37.512067 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.751087 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:37 crc kubenswrapper[4766]: I1213 03:44:37.757123 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.186144 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.186176 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.186194 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.186155 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.187520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.187571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.187591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.187772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.187808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.187819 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.188072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.188100 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:38 crc kubenswrapper[4766]: I1213 03:44:38.188112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:39 crc kubenswrapper[4766]: I1213 03:44:39.189047 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:39 crc kubenswrapper[4766]: I1213 03:44:39.190070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:39 crc kubenswrapper[4766]: I1213 03:44:39.190100 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:39 crc kubenswrapper[4766]: I1213 03:44:39.190112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:39 crc kubenswrapper[4766]: I1213 03:44:39.566859 4766 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 13 03:44:39 crc kubenswrapper[4766]: I1213 03:44:39.567283 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:39 crc kubenswrapper[4766]: I1213 03:44:39.569407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:39 crc kubenswrapper[4766]: I1213 03:44:39.569484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:39 crc kubenswrapper[4766]: I1213 03:44:39.569498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:39 crc kubenswrapper[4766]: E1213 03:44:39.751073 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 03:44:40 crc kubenswrapper[4766]: I1213 03:44:40.205047 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 13 03:44:40 crc kubenswrapper[4766]: I1213 03:44:40.205257 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:40 crc kubenswrapper[4766]: I1213 03:44:40.206401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:40 crc kubenswrapper[4766]: I1213 03:44:40.206471 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:40 crc kubenswrapper[4766]: I1213 03:44:40.206491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:40 crc kubenswrapper[4766]: I1213 03:44:40.510089 4766 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 03:44:40 crc kubenswrapper[4766]: I1213 03:44:40.510255 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 13 03:44:41 crc kubenswrapper[4766]: I1213 03:44:41.339636 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:41 crc kubenswrapper[4766]: I1213 03:44:41.339856 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:41 crc kubenswrapper[4766]: I1213 03:44:41.341337 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:41 crc kubenswrapper[4766]: I1213 03:44:41.341388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:41 crc kubenswrapper[4766]: I1213 03:44:41.341586 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:42 crc 
kubenswrapper[4766]: I1213 03:44:42.644969 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:42 crc kubenswrapper[4766]: I1213 03:44:42.645180 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:42 crc kubenswrapper[4766]: I1213 03:44:42.646890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:42 crc kubenswrapper[4766]: I1213 03:44:42.646942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:42 crc kubenswrapper[4766]: I1213 03:44:42.646963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:45 crc kubenswrapper[4766]: E1213 03:44:45.670516 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Dec 13 03:44:45 crc kubenswrapper[4766]: W1213 03:44:45.835150 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 13 03:44:45 crc kubenswrapper[4766]: I1213 03:44:45.835341 4766 trace.go:236] Trace[1317838940]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 03:44:35.810) (total time: 10024ms): Dec 13 03:44:45 crc kubenswrapper[4766]: Trace[1317838940]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10024ms (03:44:45.835) Dec 13 03:44:45 crc kubenswrapper[4766]: Trace[1317838940]: [10.024305273s] [10.024305273s] END Dec 13 03:44:45 crc kubenswrapper[4766]: E1213 03:44:45.835393 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 13 03:44:46 crc kubenswrapper[4766]: E1213 03:44:46.068510 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.305304 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.306708 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="88091f2417623f21f49464b7d45d40f6b74d9cd736632e5570b0614a155e8aee" exitCode=255 Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.306751 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"88091f2417623f21f49464b7d45d40f6b74d9cd736632e5570b0614a155e8aee"} Dec 13 
03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.306946 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.307957 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.307978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.307989 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.308868 4766 scope.go:117] "RemoveContainer" containerID="88091f2417623f21f49464b7d45d40f6b74d9cd736632e5570b0614a155e8aee"
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.382053 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.442972 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.443055 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.453036 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.453138 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.840359 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]log ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]etcd ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/priority-and-fairness-filter ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/start-apiextensions-informers ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld
Dec 13 03:44:46 crc kubenswrapper[4766]: [-]poststarthook/crd-informer-synced failed: reason withheld
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/start-system-namespaces-controller ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 13 03:44:46 crc kubenswrapper[4766]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/bootstrap-controller ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [-]poststarthook/apiservice-registration-controller failed: reason withheld
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]autoregister-completion ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/apiservice-openapi-controller ok
Dec 13 03:44:46 crc kubenswrapper[4766]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 13 03:44:46 crc kubenswrapper[4766]: livez check failed
Dec 13 03:44:46 crc kubenswrapper[4766]: I1213 03:44:46.842273 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:44:47 crc kubenswrapper[4766]: I1213 03:44:47.312084 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 13 03:44:47 crc kubenswrapper[4766]: I1213 03:44:47.314560 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3"}
Dec 13 03:44:47 crc kubenswrapper[4766]: I1213 03:44:47.314823 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 13 03:44:47 crc kubenswrapper[4766]: I1213 03:44:47.315781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:47 crc kubenswrapper[4766]: I1213 03:44:47.315942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:47 crc kubenswrapper[4766]: I1213 03:44:47.316034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:49 crc kubenswrapper[4766]: I1213 03:44:49.609408 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 13 03:44:49 crc kubenswrapper[4766]: I1213 03:44:49.609860 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 13 03:44:49 crc kubenswrapper[4766]: I1213 03:44:49.617325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:49 crc kubenswrapper[4766]: I1213 03:44:49.617580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:49 crc kubenswrapper[4766]: I1213 03:44:49.617611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:49 crc kubenswrapper[4766]: I1213 03:44:49.631762 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 13 03:44:49 crc kubenswrapper[4766]: E1213 03:44:49.751522 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 13 03:44:50 crc kubenswrapper[4766]: I1213 03:44:50.362962 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 13 03:44:50 crc kubenswrapper[4766]: I1213 03:44:50.364889 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:50 crc kubenswrapper[4766]: I1213 03:44:50.364978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:50 crc kubenswrapper[4766]: I1213 03:44:50.364995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:50 crc kubenswrapper[4766]: I1213 03:44:50.510249 4766 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 03:44:50 crc kubenswrapper[4766]: I1213 03:44:50.510412 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.444970 4766 trace.go:236] Trace[1223213351]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 03:44:36.901) (total time: 14543ms):
Dec 13 03:44:51 crc kubenswrapper[4766]: Trace[1223213351]: ---"Objects listed" error: 14543ms (03:44:51.444)
Dec 13 03:44:51 crc kubenswrapper[4766]: Trace[1223213351]: [14.543319485s] [14.543319485s] END
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.445018 4766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.445636 4766 trace.go:236] Trace[105208146]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 03:44:38.235) (total time: 13209ms):
Dec 13 03:44:51 crc kubenswrapper[4766]: Trace[105208146]: ---"Objects listed" error: 13209ms (03:44:51.445)
Dec 13 03:44:51 crc kubenswrapper[4766]: Trace[105208146]: [13.209755884s] [13.209755884s] END
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.445720 4766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.448187 4766 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.449050 4766 trace.go:236] Trace[1608781776]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 03:44:39.202) (total time: 12246ms):
Dec 13 03:44:51 crc kubenswrapper[4766]: Trace[1608781776]: ---"Objects listed" error: 12246ms (03:44:51.448)
Dec 13 03:44:51 crc kubenswrapper[4766]: Trace[1608781776]: [12.246373212s] [12.246373212s] END
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.449111 4766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.811844 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.812041 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.812219 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.813248 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.813311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.813327 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:51 crc kubenswrapper[4766]: I1213 03:44:51.817056 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.369673 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.370961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.371095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.371217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.469457 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.478827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.478884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.478904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.479102 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.527690 4766 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.527876 4766 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Dec 13 03:44:52 crc kubenswrapper[4766]: E1213 03:44:52.527899 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.532373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.532555 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.532676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.532801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.532926 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:52Z","lastTransitionTime":"2025-12-13T03:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:44:52 crc kubenswrapper[4766]: E1213 03:44:52.548728 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.552660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.552686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.552694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.552740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.552753 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:52Z","lastTransitionTime":"2025-12-13T03:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:44:52 crc kubenswrapper[4766]: E1213 03:44:52.565699 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.570328 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.570386 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.570397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.570415 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.570443 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:52Z","lastTransitionTime":"2025-12-13T03:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:44:52 crc kubenswrapper[4766]: E1213 03:44:52.601901 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.643726 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.644033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.644181 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.644286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:44:52 crc kubenswrapper[4766]: I1213 03:44:52.644390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:52Z","lastTransitionTime":"2025-12-13T03:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:44:52 crc kubenswrapper[4766]: E1213 03:44:52.910478 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:52 crc kubenswrapper[4766]: E1213 03:44:52.910664 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 03:44:52 crc kubenswrapper[4766]: E1213 03:44:52.910705 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:53 crc kubenswrapper[4766]: E1213 03:44:53.011651 4766 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:53 crc kubenswrapper[4766]: E1213 03:44:53.136847 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:53 crc kubenswrapper[4766]: E1213 03:44:53.237826 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:53 crc kubenswrapper[4766]: E1213 03:44:53.338330 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:53 crc kubenswrapper[4766]: I1213 03:44:53.376271 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 13 03:44:53 crc kubenswrapper[4766]: I1213 03:44:53.377234 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 13 03:44:53 crc kubenswrapper[4766]: I1213 03:44:53.379535 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3" exitCode=255 Dec 13 03:44:53 crc kubenswrapper[4766]: I1213 03:44:53.379603 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3"} Dec 13 03:44:53 crc kubenswrapper[4766]: I1213 03:44:53.379676 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:53 crc kubenswrapper[4766]: I1213 03:44:53.379722 4766 scope.go:117] "RemoveContainer" containerID="88091f2417623f21f49464b7d45d40f6b74d9cd736632e5570b0614a155e8aee" Dec 13 03:44:53 crc kubenswrapper[4766]: I1213 03:44:53.380949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:53 crc kubenswrapper[4766]: I1213 03:44:53.380984 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:53 crc kubenswrapper[4766]: I1213 03:44:53.380996 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:53 crc kubenswrapper[4766]: I1213 03:44:53.382027 4766 scope.go:117] "RemoveContainer" containerID="1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3" Dec 13 03:44:53 crc kubenswrapper[4766]: E1213 03:44:53.382229 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 13 03:44:53 crc kubenswrapper[4766]: E1213 03:44:53.439095 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:53 crc kubenswrapper[4766]: E1213 03:44:53.540236 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:53 crc kubenswrapper[4766]: E1213 
03:44:53.643168 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:53 crc kubenswrapper[4766]: E1213 03:44:53.827383 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:53 crc kubenswrapper[4766]: E1213 03:44:53.928314 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:54 crc kubenswrapper[4766]: E1213 03:44:54.028965 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:54 crc kubenswrapper[4766]: E1213 03:44:54.129416 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:54 crc kubenswrapper[4766]: E1213 03:44:54.296010 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:54 crc kubenswrapper[4766]: I1213 03:44:54.384830 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 13 03:44:54 crc kubenswrapper[4766]: I1213 03:44:54.387414 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 03:44:54 crc kubenswrapper[4766]: I1213 03:44:54.388752 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:54 crc kubenswrapper[4766]: I1213 03:44:54.388803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:54 crc kubenswrapper[4766]: I1213 03:44:54.388815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:54 crc kubenswrapper[4766]: I1213 03:44:54.389686 4766 scope.go:117] "RemoveContainer" containerID="1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3" Dec 13 03:44:54 crc kubenswrapper[4766]: E1213 03:44:54.389956 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 13 03:44:54 crc kubenswrapper[4766]: E1213 03:44:54.396800 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:54 crc kubenswrapper[4766]: E1213 03:44:54.497661 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:54 crc kubenswrapper[4766]: E1213 03:44:54.598653 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:54 crc kubenswrapper[4766]: E1213 03:44:54.699443 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:54 crc kubenswrapper[4766]: E1213 03:44:54.800007 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:54 crc kubenswrapper[4766]: E1213 03:44:54.900261 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not 
found" Dec 13 03:44:55 crc kubenswrapper[4766]: E1213 03:44:55.000866 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:55 crc kubenswrapper[4766]: E1213 03:44:55.102562 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:55 crc kubenswrapper[4766]: E1213 03:44:55.203238 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:55 crc kubenswrapper[4766]: E1213 03:44:55.304407 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:55 crc kubenswrapper[4766]: E1213 03:44:55.405868 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:55 crc kubenswrapper[4766]: E1213 03:44:55.506356 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:55 crc kubenswrapper[4766]: E1213 03:44:55.607328 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:55 crc kubenswrapper[4766]: E1213 03:44:55.707828 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:55 crc kubenswrapper[4766]: E1213 03:44:55.808638 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:55 crc kubenswrapper[4766]: E1213 03:44:55.908797 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:56 crc kubenswrapper[4766]: E1213 03:44:56.009843 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:56 crc kubenswrapper[4766]: E1213 03:44:56.110619 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:56 crc kubenswrapper[4766]: E1213 03:44:56.211113 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:56 crc kubenswrapper[4766]: E1213 03:44:56.311914 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:56 crc kubenswrapper[4766]: E1213 03:44:56.412659 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:56 crc kubenswrapper[4766]: E1213 03:44:56.513769 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:56 crc kubenswrapper[4766]: E1213 03:44:56.614718 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:56 crc kubenswrapper[4766]: E1213 03:44:56.716183 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:56 crc kubenswrapper[4766]: E1213 03:44:56.817062 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:56 crc kubenswrapper[4766]: E1213 03:44:56.917649 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.018733 4766 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.119501 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.181824 4766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.223690 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.223734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.223746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.223774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.223790 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:57Z","lastTransitionTime":"2025-12-13T03:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.327398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.327826 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.327926 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.328050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.328157 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:57Z","lastTransitionTime":"2025-12-13T03:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.431583 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.431654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.431669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.431692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.431708 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:57Z","lastTransitionTime":"2025-12-13T03:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.470851 4766 apiserver.go:52] "Watching apiserver" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.475288 4766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.475907 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6n4vc","openshift-multus/multus-additional-cni-plugins-c89xg","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-2dfkj","openshift-machine-config-operator/machine-config-daemon-94w9l","openshift-image-registry/node-ca-n6hlf","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-rxssr"] Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.476578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.476654 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.476694 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.476728 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.477095 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.477130 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.477389 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.479409 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.479725 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.480618 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.479775 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.479744 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.479807 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.479852 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.480119 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-rxssr" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.480058 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.480327 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.480335 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.480372 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.480393 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.480524 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.480615 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.480658 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.480684 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.483630 4766 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.484056 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.484316 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.484576 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.484614 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.484653 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.486948 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.489125 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.489659 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.489964 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 13 
03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.491639 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.491686 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.491830 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.493013 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.494387 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.494567 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.604349 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.604671 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.604849 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.605085 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.605239 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.605318 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.605653 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.605763 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606189 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606222 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606244 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606309 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606325 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606342 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606361 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606384 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606439 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606465 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606483 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606500 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606516 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606532 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606548 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606563 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606589 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606605 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606622 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606642 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606677 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606709 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606742 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606779 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606799 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606815 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606859 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606893 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606908 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606924 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606941 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606957 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606977 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607008 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607030 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607045 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607066 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607087 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607107 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607128 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607149 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607172 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607196 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607240 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607256 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607273 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607307 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607325 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607341 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607358 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 
03:44:57.607375 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607401 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607459 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607477 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607498 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607515 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607540 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607557 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607574 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607591 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc 
kubenswrapper[4766]: I1213 03:44:57.607608 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607635 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607653 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607675 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607695 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607712 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607734 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607751 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607771 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607788 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 
03:44:57.607807 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607834 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607869 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607903 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607925 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607945 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607961 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607979 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607996 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608017 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608032 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608054 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608075 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608093 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608115 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608131 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608152 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608176 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608205 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608226 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608245 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608263 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608281 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608311 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608337 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608357 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608377 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608484 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608503 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608531 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.609063 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.609085 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.609105 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.609123 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611214 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611237 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611261 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611280 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611299 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611317 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611336 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611355 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611375 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611440 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611462 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611486 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611506 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611524 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611553 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 13 03:44:57 crc 
kubenswrapper[4766]: I1213 03:44:57.611571 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611588 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611605 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611624 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611644 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611662 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611679 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611699 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611718 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611736 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611757 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611776 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611795 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.606257 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611820 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611986 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612050 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612091 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612111 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612130 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612147 4766 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612166 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612183 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612200 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612220 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612236 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612252 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612573 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612596 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612696 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 13 03:44:57 crc 
kubenswrapper[4766]: I1213 03:44:57.613247 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613302 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613340 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613359 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613384 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613405 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613444 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613465 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613484 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613564 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 
03:44:57.613645 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613712 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613739 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613762 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613779 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613797 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613820 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613850 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613874 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613910 
4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613934 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613968 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614003 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614030 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614055 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614079 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614107 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614164 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614189 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 13 03:44:57 crc 
kubenswrapper[4766]: I1213 03:44:57.614230 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614248 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614268 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614287 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614307 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614323 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614345 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614364 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614391 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614411 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614446 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614466 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614484 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614607 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-openvswitch\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614679 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-netd\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614738 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614786 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpg8t\" (UniqueName: \"kubernetes.io/projected/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-kube-api-access-vpg8t\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614812 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-socket-dir-parent\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614853 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/11c14fd8-7cc0-4f63-8900-c0ae7306d019-cni-binary-copy\") pod \"multus-additional-cni-plugins-c89xg\" (UID: 
\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614874 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-slash\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-ovn-kubernetes\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614936 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-script-lib\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614975 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40ad063a-190c-4789-ab91-fb0909fde2ed-hosts-file\") pod \"node-resolver-rxssr\" (UID: \"40ad063a-190c-4789-ab91-fb0909fde2ed\") " pod="openshift-dns/node-resolver-rxssr" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615148 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-run-k8s-cni-cncf-io\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615180 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-hostroot\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615238 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gcjs\" (UniqueName: \"kubernetes.io/projected/c2621562-4c91-40a3-ad72-29d325404496-kube-api-access-9gcjs\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-var-lib-kubelet\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615329 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-cni-dir\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615349 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0a99c6a2-c76c-4551-8e9f-a046e4723fe0-serviceca\") pod \"node-ca-n6hlf\" (UID: \"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\") " pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615366 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-config\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615450 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-run-netns\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615469 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-conf-dir\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615485 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-node-log\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615508 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615525 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b724d1e1-9ded-434e-b852-f5233f27ef32-cni-binary-copy\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615556 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-os-release\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615575 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtcwd\" (UniqueName: \"kubernetes.io/projected/0a99c6a2-c76c-4551-8e9f-a046e4723fe0-kube-api-access-dtcwd\") pod \"node-ca-n6hlf\" (UID: \"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\") " pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615592 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615620 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-system-cni-dir\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615640 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-rootfs\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615665 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615681 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-log-socket\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615704 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-run-multus-certs\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615723 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-var-lib-cni-bin\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" 
Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615740 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-var-lib-cni-multus\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615758 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-systemd-units\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615774 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-bin\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615795 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615817 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615835 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-system-cni-dir\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615854 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-ovn\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615898 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-mcd-auth-proxy-config\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615928 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-cnibin\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615953 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-os-release\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615976 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615997 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmdv9\" (UniqueName: \"kubernetes.io/projected/40ad063a-190c-4789-ab91-fb0909fde2ed-kube-api-access-mmdv9\") pod \"node-resolver-rxssr\" (UID: \"40ad063a-190c-4789-ab91-fb0909fde2ed\") " pod="openshift-dns/node-resolver-rxssr" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616019 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdw2r\" (UniqueName: \"kubernetes.io/projected/b724d1e1-9ded-434e-b852-f5233f27ef32-kube-api-access-hdw2r\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616129 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:57Z","lastTransitionTime":"2025-12-13T03:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616047 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/11c14fd8-7cc0-4f63-8900-c0ae7306d019-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616450 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2621562-4c91-40a3-ad72-29d325404496-ovn-node-metrics-cert\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616508 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616594 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-etc-kubernetes\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616636 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-daemon-config\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616821 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-cnibin\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616857 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a99c6a2-c76c-4551-8e9f-a046e4723fe0-host\") pod \"node-ca-n6hlf\" (UID: \"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\") " pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616909 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-kubelet\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616935 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") 
pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616969 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616996 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-netns\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617021 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-var-lib-openvswitch\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617091 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617117 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-proxy-tls\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617146 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-env-overrides\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617178 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617210 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ghr\" (UniqueName: \"kubernetes.io/projected/11c14fd8-7cc0-4f63-8900-c0ae7306d019-kube-api-access-z4ghr\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 
03:44:57.617233 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-systemd\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617267 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-etc-openvswitch\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617361 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.618558 4766 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.623214 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612041 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.645541 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.645549 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607488 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.646228 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:44:58.146185516 +0000 UTC m=+29.656118480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.646280 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.646187 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.646355 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.609848 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.646448 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.646890 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.610571 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.610999 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.611820 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.613805 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.614810 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615197 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615516 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.615908 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616152 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616256 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616518 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.616848 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617564 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.617844 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.618097 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.618237 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.618270 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.620325 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.620390 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.620598 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.620759 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.621021 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.621127 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.621286 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.621504 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.621510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.621559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.621920 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.622097 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.622175 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.622299 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.622383 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.622387 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.623139 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.623180 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.623239 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.623466 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.623877 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.624204 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.624276 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.625060 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.625294 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.625618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.626002 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.626771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.626816 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.626860 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.626888 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.627806 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.628567 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.628840 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.630456 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.630799 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.631327 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.631447 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.631533 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.631764 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.632372 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.632714 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.633102 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.633128 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.633532 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.634345 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.634625 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.634625 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.635623 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.639556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.643740 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.644157 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.644775 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.647502 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.647508 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:44:58.147481233 +0000 UTC m=+29.657414197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.647163 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.647259 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.647607 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.645240 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.647805 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.647790 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.647826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.647989 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.612973 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648297 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648405 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648638 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648723 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648447 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648809 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648812 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648870 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648890 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.648978 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.649034 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.649222 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.608140 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.649399 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.649547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.649703 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.649787 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.649831 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.649934 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.650135 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.650308 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.650618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.650734 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.651014 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.651122 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.652390 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.653616 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.653784 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.653861 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.654051 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.654834 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.656109 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.656276 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.656786 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.656814 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.657168 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.657323 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.657380 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.657715 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.657786 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.657985 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.658232 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.658286 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.658671 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.658780 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.659002 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.659099 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.659132 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.659559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.659637 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.660011 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.660047 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.660172 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.660366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.660503 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.660659 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.660726 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.661372 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.661701 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.662127 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.662158 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.662324 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.662367 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.662405 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.663025 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.663032 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.663107 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.663209 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.663464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.663593 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.663766 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.664439 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.664725 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.664983 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.665267 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.665689 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.666028 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.668666 4766 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.668921 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.670474 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.670746 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.670878 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.671070 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.671359 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.671509 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.670034 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.671835 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.672167 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.680592 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:44:58.180538986 +0000 UTC m=+29.690471950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.682490 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.682857 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.683105 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.685225 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.685386 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.685672 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.686866 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.687166 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.687194 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.687588 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.688125 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.688176 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.688603 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.688949 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.689550 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.690034 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.690258 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.690470 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.692031 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.607469 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.692591 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.693337 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.693388 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.693403 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.693606 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.693871 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.694162 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 03:44:58.19364327 +0000 UTC m=+29.703576234 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.694382 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.694917 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.695866 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.700067 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.704740 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.704783 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.704801 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:57 crc kubenswrapper[4766]: E1213 03:44:57.704877 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 03:44:58.204849709 +0000 UTC m=+29.714782673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.706764 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.706273 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.708976 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.713305 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.714004 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.717959 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-mcd-auth-proxy-config\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.717999 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-system-cni-dir\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718020 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-ovn\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718048 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmdv9\" (UniqueName: \"kubernetes.io/projected/40ad063a-190c-4789-ab91-fb0909fde2ed-kube-api-access-mmdv9\") pod \"node-resolver-rxssr\" (UID: \"40ad063a-190c-4789-ab91-fb0909fde2ed\") " pod="openshift-dns/node-resolver-rxssr" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718066 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-cnibin\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718081 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-os-release\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718120 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-etc-kubernetes\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718139 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdw2r\" (UniqueName: \"kubernetes.io/projected/b724d1e1-9ded-434e-b852-f5233f27ef32-kube-api-access-hdw2r\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718163 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/11c14fd8-7cc0-4f63-8900-c0ae7306d019-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718190 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2621562-4c91-40a3-ad72-29d325404496-ovn-node-metrics-cert\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718147 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-ovn\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718277 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-system-cni-dir\") pod 
\"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718324 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-os-release\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718281 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-cnibin\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718208 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-cnibin\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718243 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-cnibin\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718470 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a99c6a2-c76c-4551-8e9f-a046e4723fe0-host\") pod \"node-ca-n6hlf\" (UID: \"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\") " pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-etc-kubernetes\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718576 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-kubelet\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718582 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a99c6a2-c76c-4551-8e9f-a046e4723fe0-host\") pod \"node-ca-n6hlf\" (UID: \"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\") " pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718615 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-kubelet\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718649 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718713 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718742 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-daemon-config\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-proxy-tls\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-netns\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718935 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-var-lib-openvswitch\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718952 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-env-overrides\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718968 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-systemd\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.718986 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-etc-openvswitch\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719002 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719024 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ghr\" (UniqueName: \"kubernetes.io/projected/11c14fd8-7cc0-4f63-8900-c0ae7306d019-kube-api-access-z4ghr\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719042 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpg8t\" (UniqueName: \"kubernetes.io/projected/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-kube-api-access-vpg8t\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-socket-dir-parent\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719080 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-openvswitch\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719096 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-netd\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719123 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-script-lib\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719148 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40ad063a-190c-4789-ab91-fb0909fde2ed-hosts-file\") pod \"node-resolver-rxssr\" (UID: \"40ad063a-190c-4789-ab91-fb0909fde2ed\") " pod="openshift-dns/node-resolver-rxssr" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719164 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-run-k8s-cni-cncf-io\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719182 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/11c14fd8-7cc0-4f63-8900-c0ae7306d019-cni-binary-copy\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719199 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-slash\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719218 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-ovn-kubernetes\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719244 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-var-lib-kubelet\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719262 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-hostroot\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719280 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gcjs\" (UniqueName: \"kubernetes.io/projected/c2621562-4c91-40a3-ad72-29d325404496-kube-api-access-9gcjs\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719299 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-cni-dir\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0a99c6a2-c76c-4551-8e9f-a046e4723fe0-serviceca\") pod \"node-ca-n6hlf\" (UID: \"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\") " pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719333 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-config\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719353 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b724d1e1-9ded-434e-b852-f5233f27ef32-cni-binary-copy\") pod \"multus-6n4vc\" (UID: 
\"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719370 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-run-netns\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719386 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-conf-dir\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719404 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-node-log\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719411 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-socket-dir-parent\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719420 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-system-cni-dir\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719459 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-os-release\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719776 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/11c14fd8-7cc0-4f63-8900-c0ae7306d019-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719788 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719825 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-slash\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719822 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:57Z","lastTransitionTime":"2025-12-13T03:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719860 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtcwd\" (UniqueName: \"kubernetes.io/projected/0a99c6a2-c76c-4551-8e9f-a046e4723fe0-kube-api-access-dtcwd\") pod \"node-ca-n6hlf\" (UID: \"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\") " pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719878 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-ovn-kubernetes\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719882 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719900 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-run-netns\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719911 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-rootfs\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719920 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-conf-dir\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719933 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-run-multus-certs\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: 
I1213 03:44:57.719938 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-node-log\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-system-cni-dir\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719954 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719982 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-log-socket\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.719998 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-bin\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-var-lib-cni-bin\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720048 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-var-lib-cni-multus\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720067 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-systemd-units\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720077 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720167 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-etc-openvswitch\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720190 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-systemd-units\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720217 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-log-socket\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720228 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-var-lib-cni-bin\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720248 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-rootfs\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720255 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-var-lib-cni-multus\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720275 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-run-k8s-cni-cncf-io\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720283 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40ad063a-190c-4789-ab91-fb0909fde2ed-hosts-file\") pod \"node-resolver-rxssr\" (UID: \"40ad063a-190c-4789-ab91-fb0909fde2ed\") " pod="openshift-dns/node-resolver-rxssr" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720331 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720361 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-run-multus-certs\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720525 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-host-var-lib-kubelet\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.720547 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-hostroot\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721105 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/11c14fd8-7cc0-4f63-8900-c0ae7306d019-cni-binary-copy\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721137 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-var-lib-openvswitch\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721159 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-bin\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721312 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-cni-dir\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721335 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-netd\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721355 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-mcd-auth-proxy-config\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 
03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721372 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/11c14fd8-7cc0-4f63-8900-c0ae7306d019-os-release\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721361 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b724d1e1-9ded-434e-b852-f5233f27ef32-multus-daemon-config\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721403 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-systemd\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721467 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-netns\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721497 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-openvswitch\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721831 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0a99c6a2-c76c-4551-8e9f-a046e4723fe0-serviceca\") pod \"node-ca-n6hlf\" (UID: \"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\") " pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721849 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b724d1e1-9ded-434e-b852-f5233f27ef32-cni-binary-copy\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721869 4766 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721889 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721912 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721925 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721937 4766 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721950 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.721962 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722086 4766 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722105 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722163 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722175 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722185 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722274 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722284 4766 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722293 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722303 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722313 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") 
on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722323 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722333 4766 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722350 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722364 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722377 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722391 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722405 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722414 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722437 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722453 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722464 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722478 4766 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722489 4766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on 
node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722497 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722511 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722521 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722531 4766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722541 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722562 4766 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722586 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722603 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722617 4766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722631 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722651 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722661 4766 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722670 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722679 4766 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722690 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722700 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722709 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722718 4766 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722727 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722736 4766 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722751 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722759 4766 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722768 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722780 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722788 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722798 4766 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722807 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722816 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722825 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722834 4766 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722843 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722852 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722861 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722870 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722878 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722887 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722895 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722910 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722923 4766 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722932 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722941 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722949 4766 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722959 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.722975 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723014 4766 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723026 4766 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723035 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723046 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723074 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723084 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723092 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723101 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723111 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723120 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723130 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723139 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723152 4766 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723162 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723179 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723193 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723203 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723213 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723225 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723313 4766 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723329 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723343 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723356 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723369 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723380 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723391 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723401 4766 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723411 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723421 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723445 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723455 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723465 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723468 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2621562-4c91-40a3-ad72-29d325404496-ovn-node-metrics-cert\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: 
I1213 03:44:57.723477 4766 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723520 4766 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723532 4766 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723544 4766 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723556 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723568 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723580 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723590 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723603 4766 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723617 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723632 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723645 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723657 4766 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723673 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723688 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723702 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723713 4766 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723727 4766 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723738 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723750 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723761 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723772 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723782 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723792 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723804 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723813 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723823 4766 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723834 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723843 4766 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723853 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723883 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723898 4766 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723912 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723926 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.723941 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724360 4766 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724376 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724387 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724397 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724411 4766 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724445 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724458 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724470 4766 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724480 4766 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724491 4766 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724501 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724515 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724524 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724534 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724543 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724552 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724561 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724574 4766 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724583 4766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724593 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724602 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724612 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724622 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724835 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724855 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724867 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724878 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724922 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724934 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724946 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724977 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724989 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724999 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725011 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725023 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725035 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.724174 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725541 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725567 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725576 4766 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725585 4766 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725595 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725605 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725614 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725628 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") 
on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725637 4766 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725743 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725753 4766 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725762 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725771 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725780 4766 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.725790 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.734245 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-env-overrides\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.735164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-script-lib\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.735811 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-proxy-tls\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.736567 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-config\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.738940 4766 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.740494 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmdv9\" (UniqueName: \"kubernetes.io/projected/40ad063a-190c-4789-ab91-fb0909fde2ed-kube-api-access-mmdv9\") pod \"node-resolver-rxssr\" (UID: \"40ad063a-190c-4789-ab91-fb0909fde2ed\") " pod="openshift-dns/node-resolver-rxssr" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.740550 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdw2r\" (UniqueName: \"kubernetes.io/projected/b724d1e1-9ded-434e-b852-f5233f27ef32-kube-api-access-hdw2r\") pod \"multus-6n4vc\" (UID: \"b724d1e1-9ded-434e-b852-f5233f27ef32\") " pod="openshift-multus/multus-6n4vc" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.742110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpg8t\" (UniqueName: \"kubernetes.io/projected/71e6a48b-4f5d-4299-9c7b-98dbe11e670e-kube-api-access-vpg8t\") pod \"machine-config-daemon-94w9l\" (UID: \"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\") " pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.742387 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtcwd\" (UniqueName: \"kubernetes.io/projected/0a99c6a2-c76c-4551-8e9f-a046e4723fe0-kube-api-access-dtcwd\") pod \"node-ca-n6hlf\" (UID: \"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\") " pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.743633 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.743642 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ghr\" (UniqueName: \"kubernetes.io/projected/11c14fd8-7cc0-4f63-8900-c0ae7306d019-kube-api-access-z4ghr\") pod \"multus-additional-cni-plugins-c89xg\" (UID: \"11c14fd8-7cc0-4f63-8900-c0ae7306d019\") " pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.747907 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gcjs\" (UniqueName: \"kubernetes.io/projected/c2621562-4c91-40a3-ad72-29d325404496-kube-api-access-9gcjs\") pod \"ovnkube-node-2dfkj\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.748639 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-n6hlf" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.750288 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.755338 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.760147 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.767329 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.771227 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-rxssr" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.781891 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.783317 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:44:57 crc kubenswrapper[4766]: W1213 03:44:57.796374 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40ad063a_190c_4789_ab91_fb0909fde2ed.slice/crio-ab44c27fb4153ff9d9d6f623c6672b8680b5ff6bf336b8dc004e77e84b56eba9 WatchSource:0}: Error finding container ab44c27fb4153ff9d9d6f623c6672b8680b5ff6bf336b8dc004e77e84b56eba9: Status 404 returned error can't find the container with id ab44c27fb4153ff9d9d6f623c6672b8680b5ff6bf336b8dc004e77e84b56eba9 Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.797839 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.825619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.825688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.825706 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.825731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.825754 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:57Z","lastTransitionTime":"2025-12-13T03:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.826817 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.826851 4766 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.845394 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.861044 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.872832 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.889164 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.902585 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.923645 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.925345 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.937933 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.937974 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.938480 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.938492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.938510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.938760 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:57Z","lastTransitionTime":"2025-12-13T03:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.949625 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.960710 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.964757 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 03:44:57 crc kubenswrapper[4766]: I1213 03:44:57.984205 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\
\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:57 crc kubenswrapper[4766]: W1213 03:44:57.985115 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-9a3d9d24ff31b0f14bf2b547814f4153d7aabaf25943fbd53cb2fa866513289f WatchSource:0}: Error finding container 9a3d9d24ff31b0f14bf2b547814f4153d7aabaf25943fbd53cb2fa866513289f: Status 404 returned error can't find the container with id 9a3d9d24ff31b0f14bf2b547814f4153d7aabaf25943fbd53cb2fa866513289f Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.003253 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.009808 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.017276 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.020370 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6n4vc" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.036333 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c89xg" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.044711 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.045282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.045326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.045340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:58 crc 
kubenswrapper[4766]: I1213 03:44:58.045364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.045378 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:58Z","lastTransitionTime":"2025-12-13T03:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.057874 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.066687 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: W1213 03:44:58.072473 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb724d1e1_9ded_434e_b852_f5233f27ef32.slice/crio-07d5bd43d6a735b58ff8550b1cf180e7869eaaa5fefcedf8ebdedd60c6e6bf11 WatchSource:0}: Error finding container 07d5bd43d6a735b58ff8550b1cf180e7869eaaa5fefcedf8ebdedd60c6e6bf11: Status 404 returned error can't find the container with id 07d5bd43d6a735b58ff8550b1cf180e7869eaaa5fefcedf8ebdedd60c6e6bf11 Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.148088 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.148120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.148130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.148147 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.148156 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:58Z","lastTransitionTime":"2025-12-13T03:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.230196 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.230780 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.230821 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.230845 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.230873 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.230966 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231015 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:44:59.231001862 +0000 UTC m=+30.740934826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231085 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:44:59.231075844 +0000 UTC m=+30.741008808 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231192 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231208 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231222 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231268 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 03:44:59.231258799 +0000 UTC m=+30.741191763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231327 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231343 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231353 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231379 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 03:44:59.231371522 +0000 UTC m=+30.741304486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231464 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.231500 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:44:59.231490106 +0000 UTC m=+30.741423070 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.325974 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.326026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.326049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.326068 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.326083 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:58Z","lastTransitionTime":"2025-12-13T03:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.418204 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6n4vc" event={"ID":"b724d1e1-9ded-434e-b852-f5233f27ef32","Type":"ContainerStarted","Data":"07d5bd43d6a735b58ff8550b1cf180e7869eaaa5fefcedf8ebdedd60c6e6bf11"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.426309 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.426369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"984bccf61d56ed05945c6a552659c35dc72121c452716b8fe773aa569aa9aa69"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.429553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.429600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.429613 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.429633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.429644 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:58Z","lastTransitionTime":"2025-12-13T03:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.430927 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.430992 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"2b8f02363d50ddccb826fe3999ec44ccf62feb68c146170172cb0740062ed667"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.433611 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerStarted","Data":"d38762c1088f008e27f851d31570d58db5dcebd3cdd9fba899cee86136958785"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.434828 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1488f4e99f548531f8c9e603882857bb5962e4986786562ce718ccbe2582958f"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.438247 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.445211 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rxssr" event={"ID":"40ad063a-190c-4789-ab91-fb0909fde2ed","Type":"ContainerStarted","Data":"617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.445258 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rxssr" event={"ID":"40ad063a-190c-4789-ab91-fb0909fde2ed","Type":"ContainerStarted","Data":"ab44c27fb4153ff9d9d6f623c6672b8680b5ff6bf336b8dc004e77e84b56eba9"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.448452 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.448508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0b5e486363f706c2729532c4d6e349d20af791af1ea184a719cd5612c302de26"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.449713 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n6hlf" 
event={"ID":"0a99c6a2-c76c-4551-8e9f-a046e4723fe0","Type":"ContainerStarted","Data":"a2cde3e8d04ac5f831d45db7982e660eed8fa7a0fe903af935910e7bcea0d12b"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.450753 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.451004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9a3d9d24ff31b0f14bf2b547814f4153d7aabaf25943fbd53cb2fa866513289f"} Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.456541 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.461335 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.469830 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.482691 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.495843 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.511648 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.526280 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.532916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.532971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.532981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.533001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.533013 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:58Z","lastTransitionTime":"2025-12-13T03:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.541569 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.555035 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.576454 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.591697 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578b
c18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.612663 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.615694 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:44:58 crc kubenswrapper[4766]: E1213 03:44:58.615901 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.628542 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.641967 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.642018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.642325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.642341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.642373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.642390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:58Z","lastTransitionTime":"2025-12-13T03:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.655153 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.680573 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.704958 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.746459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.746533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.746545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.746563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.746574 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:58Z","lastTransitionTime":"2025-12-13T03:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.751087 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.780716 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.792206 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.801519 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.819351 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.848998 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.850101 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.850151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.850161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.850181 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.850193 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:58Z","lastTransitionTime":"2025-12-13T03:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.923045 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.954027 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.954267 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.954294 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.954306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.954329 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:58 crc kubenswrapper[4766]: I1213 03:44:58.954348 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:58Z","lastTransitionTime":"2025-12-13T03:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.057471 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.057522 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.057547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.057587 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.057614 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:59Z","lastTransitionTime":"2025-12-13T03:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.162302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.162352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.162377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.162421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.162507 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:59Z","lastTransitionTime":"2025-12-13T03:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.322685 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.322842 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.322895 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.322922 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.322953 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.323150 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.323356 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.323387 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.323407 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.323518 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-13 03:45:01.323472612 +0000 UTC m=+32.833405576 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.323573 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.323608 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.323626 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.323699 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:01.323676718 +0000 UTC m=+32.833609682 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.323952 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:45:01.323915335 +0000 UTC m=+32.833848299 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.324064 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:01.324052739 +0000 UTC m=+32.833985703 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.324216 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.324339 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:01.324328987 +0000 UTC m=+32.834261951 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.344963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.345049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.345076 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.345110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.345127 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:59Z","lastTransitionTime":"2025-12-13T03:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.448006 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.448054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.448067 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.448088 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.448101 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:59Z","lastTransitionTime":"2025-12-13T03:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.469508 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d" exitCode=0 Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.469593 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.471454 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.473341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n6hlf" event={"ID":"0a99c6a2-c76c-4551-8e9f-a046e4723fe0","Type":"ContainerStarted","Data":"a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.493089 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerStarted","Data":"bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.499131 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.500844 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6n4vc" event={"ID":"b724d1e1-9ded-434e-b852-f5233f27ef32","Type":"ContainerStarted","Data":"ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.502987 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.564072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.564114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.564126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.564151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.564165 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:59Z","lastTransitionTime":"2025-12-13T03:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.615344 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.615510 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.615982 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:44:59 crc kubenswrapper[4766]: E1213 03:44:59.616042 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.620077 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.621478 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.623359 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.624151 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.626482 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.627149 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.628209 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.636939 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.637756 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.639228 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.639861 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.652658 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.653577 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.654780 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.655905 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.656561 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.658805 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.659328 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.660036 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.661293 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.662159 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.663884 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.664394 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.665561 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.666154 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.667074 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.668479 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.670820 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 
03:44:59.670858 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.670871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.670895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.670913 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:59Z","lastTransitionTime":"2025-12-13T03:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.671496 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.672286 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.673739 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.674505 4766 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.674629 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.676543 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.677548 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.678010 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.679667 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.680710 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: 
I1213 03:44:59.681246 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.682507 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.683195 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.684292 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.685327 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.686015 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.687648 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.688156 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.689789 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.690502 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.691723 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.693025 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.693589 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.694408 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.696255 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.697027 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.697710 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.705446 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.773476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.773518 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.773530 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.773553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.773568 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:59Z","lastTransitionTime":"2025-12-13T03:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.829174 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.876089 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.876136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.876146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.876164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.876176 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:59Z","lastTransitionTime":"2025-12-13T03:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.887512 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.903837 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.918561 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.931710 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.944442 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.958212 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.977041 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.979697 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.979887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.980043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.980139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.980248 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:44:59Z","lastTransitionTime":"2025-12-13T03:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:44:59 crc kubenswrapper[4766]: I1213 03:44:59.991483 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.009567 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.038657 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.056377 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.076330 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.083573 4766 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.083631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.083645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.083673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.083688 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:00Z","lastTransitionTime":"2025-12-13T03:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.089150 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.103878 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.116848 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.135041 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.151371 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.168660 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.183041 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.186544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.186605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.186620 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.186644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.186662 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:00Z","lastTransitionTime":"2025-12-13T03:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.202559 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.248064 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.274156 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.290195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.290237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.290247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.290264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.290273 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:00Z","lastTransitionTime":"2025-12-13T03:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.292336 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.313122 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.327854 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.342938 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.358689 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.377745 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.394537 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.394912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.395031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.395045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.395081 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.395096 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:00Z","lastTransitionTime":"2025-12-13T03:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.415039 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.489370 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.500452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.500512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.500526 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.500556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.500572 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:00Z","lastTransitionTime":"2025-12-13T03:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.504203 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.511408 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"} Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.511484 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"} Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.523243 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.535696 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.553023 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.571753 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.590780 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:00Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.623308 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:00 crc kubenswrapper[4766]: E1213 03:45:00.623648 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.625781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.625842 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.625853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.625874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.625886 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:00Z","lastTransitionTime":"2025-12-13T03:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.729081 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.729125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.729137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.729157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.729167 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:00Z","lastTransitionTime":"2025-12-13T03:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.834538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.835072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.835090 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.835113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.835126 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:00Z","lastTransitionTime":"2025-12-13T03:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.979239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.979305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.979317 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.979351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:00 crc kubenswrapper[4766]: I1213 03:45:00.979364 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:00Z","lastTransitionTime":"2025-12-13T03:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.083124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.083183 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.083196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.083217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.083489 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:01Z","lastTransitionTime":"2025-12-13T03:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.186464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.186513 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.186522 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.186541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.186551 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:01Z","lastTransitionTime":"2025-12-13T03:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.289415 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.289505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.289519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.289540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.289553 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:01Z","lastTransitionTime":"2025-12-13T03:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.389310 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.389517 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.389555 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-13 03:45:05.389522639 +0000 UTC m=+36.899455603 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.389624 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.389695 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.389738 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.389810 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.389839 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.389839 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.389854 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.389890 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:05.389859609 +0000 UTC m=+36.899792573 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.389755 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.389924 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:05.38990089 +0000 UTC m=+36.899833854 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.389946 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:05.389936071 +0000 UTC m=+36.899869035 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.390081 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.390170 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.390190 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.390294 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:05.39024813 +0000 UTC m=+36.900181274 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.398399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.398474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.398488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.398510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.398522 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:01Z","lastTransitionTime":"2025-12-13T03:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.501027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.501357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.501382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.501407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.501420 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:01Z","lastTransitionTime":"2025-12-13T03:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.518926 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.519368 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.519483 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.520891 4766 generic.go:334] "Generic (PLEG): container finished" podID="11c14fd8-7cc0-4f63-8900-c0ae7306d019" containerID="bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989" exitCode=0 Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.521043 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerDied","Data":"bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.542738 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.559155 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.575294 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.597950 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.617373 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.617804 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.617688 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.618126 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.621191 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.635202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.635512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.635589 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.635683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.635750 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:01Z","lastTransitionTime":"2025-12-13T03:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.645343 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.659999 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.675051 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.688771 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.720454 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z 
is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.737345 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.744173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.744239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.744252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.744271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.744281 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:01Z","lastTransitionTime":"2025-12-13T03:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.755800 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.772830 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:01Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.847757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.847791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.847822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.847851 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:01 crc 
kubenswrapper[4766]: I1213 03:45:01.847865 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:01Z","lastTransitionTime":"2025-12-13T03:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.948505 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.952505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.952545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.952556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.952584 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.952599 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:01Z","lastTransitionTime":"2025-12-13T03:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.985675 4766 scope.go:117] "RemoveContainer" containerID="1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3" Dec 13 03:45:01 crc kubenswrapper[4766]: E1213 03:45:01.986461 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 13 03:45:01 crc kubenswrapper[4766]: I1213 03:45:01.988961 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.055710 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.055772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.055785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.055803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.055814 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:02Z","lastTransitionTime":"2025-12-13T03:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.158970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.159018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.159031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.159052 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.159066 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:02Z","lastTransitionTime":"2025-12-13T03:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.341713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.341746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.341756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.341778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.341789 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:02Z","lastTransitionTime":"2025-12-13T03:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.445281 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.445311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.445323 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.445340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.445348 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:02Z","lastTransitionTime":"2025-12-13T03:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.541719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.543714 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.550359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.550412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.550422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.550458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.550469 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:02Z","lastTransitionTime":"2025-12-13T03:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.551532 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerStarted","Data":"05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.552185 4766 scope.go:117] "RemoveContainer" containerID="1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3" Dec 13 03:45:02 crc kubenswrapper[4766]: E1213 03:45:02.552370 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.562062 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.572868 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.587833 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.615789 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:02 crc kubenswrapper[4766]: E1213 03:45:02.615979 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.627246 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.648159 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.654921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.655054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.655125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.655836 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.656024 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:02Z","lastTransitionTime":"2025-12-13T03:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.659370 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.681581 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z 
is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.697296 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.728377 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.752812 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.760127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.760167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.760182 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.760200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.760211 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:02Z","lastTransitionTime":"2025-12-13T03:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.770748 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.786177 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.860107 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.879175 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.879231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.879241 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.879265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.879284 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:02Z","lastTransitionTime":"2025-12-13T03:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.897991 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc 
kubenswrapper[4766]: I1213 03:45:02.937521 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.996781 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:02Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.999108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.999186 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.999199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:02 crc kubenswrapper[4766]: I1213 03:45:02.999239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:02.999257 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:02Z","lastTransitionTime":"2025-12-13T03:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:03 crc kubenswrapper[4766]: E1213 03:45:03.019323 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.019873 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.023559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.023592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.023603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.023621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.023635 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.046887 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.101961 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: E1213 03:45:03.103112 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.115656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.115717 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.115730 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.115748 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.115759 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.148602 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: E1213 03:45:03.175391 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.323955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.324104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.324120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.324144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.324159 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.326646 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: E1213 03:45:03.341761 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.344387 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.346623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.346661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.346676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.346695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.346966 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.357751 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: E1213 03:45:03.359478 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: E1213 03:45:03.359586 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.361295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.361317 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.361324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.361341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.361350 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.371130 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.386794 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.400740 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.415051 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.428755 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:03Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.464043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.464097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.464110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.464156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.464174 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.566872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.567356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.567368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.567387 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.567400 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.617859 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 13 03:45:03 crc kubenswrapper[4766]: E1213 03:45:03.618023 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.618579 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 03:45:03 crc kubenswrapper[4766]: E1213 03:45:03.618714 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.670600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.670646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.670665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.670697 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.670732 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.774453 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.774488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.774500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.774516 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.774527 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.877465 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.877544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.877558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.877582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.877595 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.981762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.981815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.981827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.981846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:03 crc kubenswrapper[4766]: I1213 03:45:03.981858 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:03Z","lastTransitionTime":"2025-12-13T03:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.084620 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.084691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.084702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.084723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.084737 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:04Z","lastTransitionTime":"2025-12-13T03:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.187664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.187713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.187723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.187742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.187756 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:04Z","lastTransitionTime":"2025-12-13T03:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.291605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.291718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.291734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.291758 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.291772 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:04Z","lastTransitionTime":"2025-12-13T03:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.395454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.395519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.395540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.395571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.395591 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:04Z","lastTransitionTime":"2025-12-13T03:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.506787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.506832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.506845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.506867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.506881 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:04Z","lastTransitionTime":"2025-12-13T03:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.611549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.611623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.611641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.611683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.611707 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:04Z","lastTransitionTime":"2025-12-13T03:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.615898 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 03:45:04 crc kubenswrapper[4766]: E1213 03:45:04.616095 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.794592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.794656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.794669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.794692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.794704 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:04Z","lastTransitionTime":"2025-12-13T03:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.897169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.897206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.897217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.897234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.897245 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:04Z","lastTransitionTime":"2025-12-13T03:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.999630 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.999689 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:04 crc kubenswrapper[4766]: I1213 03:45:04.999700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:04.999720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:04.999733 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:04Z","lastTransitionTime":"2025-12-13T03:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.103158 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.103211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.103227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.103245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.103257 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:05Z","lastTransitionTime":"2025-12-13T03:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.206758 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.206812 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.206835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.206855 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.206867 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:05Z","lastTransitionTime":"2025-12-13T03:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.309856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.309924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.309990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.310023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.310041 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:05Z","lastTransitionTime":"2025-12-13T03:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.412632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.413080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.413092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.413112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.413123 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:05Z","lastTransitionTime":"2025-12-13T03:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.476911 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.477032 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.477069 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.477088 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.477109 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477176 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:45:13.477143946 +0000 UTC m=+44.987076910 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477297 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477314 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477325 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477327 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477338 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477354 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477368 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:13.477358602 +0000 UTC m=+44.987291566 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477372 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477358 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477401 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:13.477385943 +0000 UTC m=+44.987318917 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477542 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:13.477519787 +0000 UTC m=+44.987452771 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.477569 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:13.477558058 +0000 UTC m=+44.987491122 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.516793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.516967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.517012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.517050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.517070 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:05Z","lastTransitionTime":"2025-12-13T03:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.579221 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"}
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.581687 4766 generic.go:334] "Generic (PLEG): container finished" podID="11c14fd8-7cc0-4f63-8900-c0ae7306d019" containerID="05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7" exitCode=0
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.581752 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerDied","Data":"05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7"}
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.599806 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.617572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.617702 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.617992 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 03:45:05 crc kubenswrapper[4766]: E1213 03:45:05.618041 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.682595 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.682643 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.682655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.682674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.682686 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:05Z","lastTransitionTime":"2025-12-13T03:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.687295 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.705583 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.722624 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.739739 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.753614 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.771826 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.789221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.789259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.789271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.789290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.789302 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:05Z","lastTransitionTime":"2025-12-13T03:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.797123 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.816557 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.892232 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.892273 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.892283 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.892300 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.892311 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:05Z","lastTransitionTime":"2025-12-13T03:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.895312 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.911120 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.947565 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.966214 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.985634 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.995502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.995546 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.995561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.995578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:05 crc kubenswrapper[4766]: I1213 03:45:05.995590 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:05Z","lastTransitionTime":"2025-12-13T03:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.098716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.098773 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.098788 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.098810 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.098825 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:06Z","lastTransitionTime":"2025-12-13T03:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.406155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.406227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.406245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.406264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.406280 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:06Z","lastTransitionTime":"2025-12-13T03:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.509059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.509356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.509512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.509593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.509678 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:06Z","lastTransitionTime":"2025-12-13T03:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.588728 4766 generic.go:334] "Generic (PLEG): container finished" podID="11c14fd8-7cc0-4f63-8900-c0ae7306d019" containerID="7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041" exitCode=0 Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.588805 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerDied","Data":"7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041"} Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.613055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.613100 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.613111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.613130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.613141 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:06Z","lastTransitionTime":"2025-12-13T03:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.615466 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:06 crc kubenswrapper[4766]: E1213 03:45:06.615672 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.615587 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.630312 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.645721 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.663100 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.678254 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.692799 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.707255 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.715670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.715735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.715749 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.715786 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.715798 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:06Z","lastTransitionTime":"2025-12-13T03:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.723209 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.736721 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.748179 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.762762 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.779694 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.793266 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.805105 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:06Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.818497 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.818564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.818578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.818604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.818617 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:06Z","lastTransitionTime":"2025-12-13T03:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.921403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.921472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.921486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.921508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:06 crc kubenswrapper[4766]: I1213 03:45:06.921523 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:06Z","lastTransitionTime":"2025-12-13T03:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.024104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.024154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.024169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.024189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.024202 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:07Z","lastTransitionTime":"2025-12-13T03:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.127070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.127106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.127116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.127134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.127144 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:07Z","lastTransitionTime":"2025-12-13T03:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.230857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.230906 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.230916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.230935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.230951 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:07Z","lastTransitionTime":"2025-12-13T03:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.334289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.334355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.334366 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.334414 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.334456 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:07Z","lastTransitionTime":"2025-12-13T03:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.438929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.439543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.439636 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.439772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.439865 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:07Z","lastTransitionTime":"2025-12-13T03:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.544565 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.544600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.544612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.544722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.544762 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:07Z","lastTransitionTime":"2025-12-13T03:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.596051 4766 generic.go:334] "Generic (PLEG): container finished" podID="11c14fd8-7cc0-4f63-8900-c0ae7306d019" containerID="fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6" exitCode=0 Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.596110 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerDied","Data":"fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.612317 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp
g8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.615349 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.615466 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:07 crc kubenswrapper[4766]: E1213 03:45:07.615508 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:07 crc kubenswrapper[4766]: E1213 03:45:07.615639 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.626207 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.639887 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.648722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.648766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.648777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.648796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.648808 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:07Z","lastTransitionTime":"2025-12-13T03:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.652419 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.666837 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.678559 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.691153 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.702824 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.715281 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.728131 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.746110 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z 
is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.751235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.751390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.751483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.751518 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.751536 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:07Z","lastTransitionTime":"2025-12-13T03:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.761875 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.776109 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.790826 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:07Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.854399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.854598 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.854625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.854645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.854658 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:07Z","lastTransitionTime":"2025-12-13T03:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.959010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.959106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.959125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.959154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:07 crc kubenswrapper[4766]: I1213 03:45:07.959174 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:07Z","lastTransitionTime":"2025-12-13T03:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.062328 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.062395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.062411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.062449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.062475 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:08Z","lastTransitionTime":"2025-12-13T03:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.194286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.194369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.194398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.194457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.194481 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:08Z","lastTransitionTime":"2025-12-13T03:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.297319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.297362 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.297374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.297419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.297445 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:08Z","lastTransitionTime":"2025-12-13T03:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.518612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.518667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.518685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.518719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.518735 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:08Z","lastTransitionTime":"2025-12-13T03:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.712157 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:08 crc kubenswrapper[4766]: E1213 03:45:08.712363 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.788329 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.788374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.788387 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.788412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.788442 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:08Z","lastTransitionTime":"2025-12-13T03:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.797172 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441"} Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.799317 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.799344 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.845088 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:08Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.870459 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:08Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.887808 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:08Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.903448 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:08Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.909787 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.910832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.910944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.910962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.910986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.911016 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:08Z","lastTransitionTime":"2025-12-13T03:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.918255 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:08Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.934201 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:08Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.934712 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:45:08 crc kubenswrapper[4766]: I1213 03:45:08.956799 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2
ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:08Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.009530 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:08Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.014033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.014106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.014121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.014139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.014149 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:09Z","lastTransitionTime":"2025-12-13T03:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.027971 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.042807 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.059155 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.080744 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.100032 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.140686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.140733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.140746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.140767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.140784 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:09Z","lastTransitionTime":"2025-12-13T03:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.146287 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.159901 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.180297 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.181174 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb"] Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.181902 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.184277 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.184277 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.188998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b40427c-c501-4c74-a7e3-2e6f1343bc03-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.189051 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfkml\" (UniqueName: \"kubernetes.io/projected/4b40427c-c501-4c74-a7e3-2e6f1343bc03-kube-api-access-mfkml\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.189118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b40427c-c501-4c74-a7e3-2e6f1343bc03-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.189155 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b40427c-c501-4c74-a7e3-2e6f1343bc03-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.199394 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.225402 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.243836 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.243893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.243905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.243926 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.243941 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:09Z","lastTransitionTime":"2025-12-13T03:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.247913 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.264847 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.285100 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.289843 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b40427c-c501-4c74-a7e3-2e6f1343bc03-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.289906 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b40427c-c501-4c74-a7e3-2e6f1343bc03-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.289938 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b40427c-c501-4c74-a7e3-2e6f1343bc03-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.289960 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfkml\" (UniqueName: \"kubernetes.io/projected/4b40427c-c501-4c74-a7e3-2e6f1343bc03-kube-api-access-mfkml\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.291366 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b40427c-c501-4c74-a7e3-2e6f1343bc03-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.291505 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b40427c-c501-4c74-a7e3-2e6f1343bc03-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.305233 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.306906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b40427c-c501-4c74-a7e3-2e6f1343bc03-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.331008 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.331553 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfkml\" (UniqueName: \"kubernetes.io/projected/4b40427c-c501-4c74-a7e3-2e6f1343bc03-kube-api-access-mfkml\") pod \"ovnkube-control-plane-749d76644c-ww5fb\" (UID: \"4b40427c-c501-4c74-a7e3-2e6f1343bc03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.374813 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.374894 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.375040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.375268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.376464 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:09Z","lastTransitionTime":"2025-12-13T03:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.393619 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.420507 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.455448 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.472555 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.481916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.481955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.481969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.481990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.482004 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:09Z","lastTransitionTime":"2025-12-13T03:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.490912 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.501043 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.506294 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: W1213 03:45:09.521553 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b40427c_c501_4c74_a7e3_2e6f1343bc03.slice/crio-a557a225dccd0910d6eb21cf8f71883985172c2e9d6fd4714fcd30f47aa1d532 WatchSource:0}: Error finding container a557a225dccd0910d6eb21cf8f71883985172c2e9d6fd4714fcd30f47aa1d532: Status 404 returned error can't find the container with id a557a225dccd0910d6eb21cf8f71883985172c2e9d6fd4714fcd30f47aa1d532 Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.527056 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.550753 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.564576 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.584843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.584884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.584895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.584916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.584928 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:09Z","lastTransitionTime":"2025-12-13T03:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.690556 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.690638 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.691285 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:09 crc kubenswrapper[4766]: E1213 03:45:09.691590 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:09 crc kubenswrapper[4766]: E1213 03:45:09.691825 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.694074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.694114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.694125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.694144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.694156 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:09Z","lastTransitionTime":"2025-12-13T03:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.711691 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.740794 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.757465 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.774300 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.790493 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.796887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.796924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.796934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.796960 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.796971 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:09Z","lastTransitionTime":"2025-12-13T03:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.804786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" event={"ID":"4b40427c-c501-4c74-a7e3-2e6f1343bc03","Type":"ContainerStarted","Data":"a557a225dccd0910d6eb21cf8f71883985172c2e9d6fd4714fcd30f47aa1d532"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.807997 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.814294 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerStarted","Data":"9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.814449 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.829565 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.850580 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.863352 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.879033 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.16
8.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.909598 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.910694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.910732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.910743 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.910760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.910771 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:09Z","lastTransitionTime":"2025-12-13T03:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.922409 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.935713 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.950656 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.969376 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.986886 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:09 crc kubenswrapper[4766]: I1213 03:45:09.999836 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:09Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.013102 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.014414 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.014605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.014711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.014796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.014872 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:10Z","lastTransitionTime":"2025-12-13T03:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.043401 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.098242 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.121099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.121137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.121146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.121163 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.121174 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:10Z","lastTransitionTime":"2025-12-13T03:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.165327 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.182675 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.203520 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.218756 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.223823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.223872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.223883 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.223902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.223913 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:10Z","lastTransitionTime":"2025-12-13T03:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.239470 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.254652 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.273368 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.287527 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.303848 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.317648 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.325874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.325917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.325928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.325943 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.325954 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:10Z","lastTransitionTime":"2025-12-13T03:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.331457 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.346149 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.370984 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.386716 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.400056 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.419303 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.429312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.429369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.429382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.429401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.429416 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:10Z","lastTransitionTime":"2025-12-13T03:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.434727 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.451394 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.466057 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.496640 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.523238 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qvxrm"] Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.523896 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:10 crc kubenswrapper[4766]: E1213 03:45:10.523977 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.531755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.531805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.531817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.531837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.531892 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:10Z","lastTransitionTime":"2025-12-13T03:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.540835 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.556126 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.572000 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.584417 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.598221 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.615336 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.615293 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: E1213 03:45:10.615584 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.618753 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.618862 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5n59\" (UniqueName: \"kubernetes.io/projected/84c9636d-a525-40e8-bc35-af07ecbdeafc-kube-api-access-d5n59\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.629555 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.634297 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.634335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.634347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.634362 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.634374 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:10Z","lastTransitionTime":"2025-12-13T03:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.647541 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.662417 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.681826 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.698558 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.719722 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2
ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.720163 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5n59\" (UniqueName: \"kubernetes.io/projected/84c9636d-a525-40e8-bc35-af07ecbdeafc-kube-api-access-d5n59\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.720202 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:10 crc kubenswrapper[4766]: E1213 03:45:10.720354 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:10 crc kubenswrapper[4766]: E1213 03:45:10.720421 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs podName:84c9636d-a525-40e8-bc35-af07ecbdeafc nodeName:}" failed. No retries permitted until 2025-12-13 03:45:11.220402838 +0000 UTC m=+42.730335802 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs") pod "network-metrics-daemon-qvxrm" (UID: "84c9636d-a525-40e8-bc35-af07ecbdeafc") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.737870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.737926 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.737938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.737959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.737973 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:10Z","lastTransitionTime":"2025-12-13T03:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.738337 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5n59\" (UniqueName: \"kubernetes.io/projected/84c9636d-a525-40e8-bc35-af07ecbdeafc-kube-api-access-d5n59\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.739292 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.754005 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.768642 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.781732 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:10Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.820972 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" event={"ID":"4b40427c-c501-4c74-a7e3-2e6f1343bc03","Type":"ContainerStarted","Data":"455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.821376 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" event={"ID":"4b40427c-c501-4c74-a7e3-2e6f1343bc03","Type":"ContainerStarted","Data":"a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.821088 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.841671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 
13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.841732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.841746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.841767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.841779 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:10Z","lastTransitionTime":"2025-12-13T03:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.944504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.944575 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.944606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.944630 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:10 crc kubenswrapper[4766]: I1213 03:45:10.944647 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:10Z","lastTransitionTime":"2025-12-13T03:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.047037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.047075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.047085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.047102 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.047114 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:11Z","lastTransitionTime":"2025-12-13T03:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.149775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.149823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.149837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.149856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.149866 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:11Z","lastTransitionTime":"2025-12-13T03:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.226693 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:11 crc kubenswrapper[4766]: E1213 03:45:11.226883 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:11 crc kubenswrapper[4766]: E1213 03:45:11.226957 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs podName:84c9636d-a525-40e8-bc35-af07ecbdeafc nodeName:}" failed. No retries permitted until 2025-12-13 03:45:12.226937051 +0000 UTC m=+43.736870015 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs") pod "network-metrics-daemon-qvxrm" (UID: "84c9636d-a525-40e8-bc35-af07ecbdeafc") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.253009 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.253337 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.253459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.253572 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.253679 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:11Z","lastTransitionTime":"2025-12-13T03:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.356739 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.356796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.356808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.356825 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.356838 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:11Z","lastTransitionTime":"2025-12-13T03:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.461387 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.461451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.461464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.461486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.461499 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:11Z","lastTransitionTime":"2025-12-13T03:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.565273 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.565336 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.565352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.565375 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.565387 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:11Z","lastTransitionTime":"2025-12-13T03:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.615505 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.615505 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:11 crc kubenswrapper[4766]: E1213 03:45:11.616796 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:11 crc kubenswrapper[4766]: E1213 03:45:11.616854 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.668058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.668110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.668123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.668144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.668175 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:11Z","lastTransitionTime":"2025-12-13T03:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.771284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.771663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.771674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.771693 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.771704 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:11Z","lastTransitionTime":"2025-12-13T03:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.847897 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:11Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.867316 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:11Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.874727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.874777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.874788 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.874807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.874826 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:11Z","lastTransitionTime":"2025-12-13T03:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.883238 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:11Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.901324 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:11Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.917846 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:11Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.950943 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:11Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.990062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.990124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.990136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.990154 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 13 03:45:11 crc kubenswrapper[4766]: I1213 03:45:11.990166 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:11Z","lastTransitionTime":"2025-12-13T03:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.089540 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:12Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.093333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.093382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.093394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.093411 4766 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.093423 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:12Z","lastTransitionTime":"2025-12-13T03:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.104314 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:12Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.123681 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:12Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.140386 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:12Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.156153 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:12Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.174868 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:12Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.192553 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:12Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.198767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.198839 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.198853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.198880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.198897 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:12Z","lastTransitionTime":"2025-12-13T03:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.212385 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:12Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.258991 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:12Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.282486 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:12Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.285106 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:12 crc kubenswrapper[4766]: E1213 03:45:12.285409 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:12 crc kubenswrapper[4766]: E1213 03:45:12.285655 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs podName:84c9636d-a525-40e8-bc35-af07ecbdeafc nodeName:}" failed. No retries permitted until 2025-12-13 03:45:14.285611885 +0000 UTC m=+45.795544849 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs") pod "network-metrics-daemon-qvxrm" (UID: "84c9636d-a525-40e8-bc35-af07ecbdeafc") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.318548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.318762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.318827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.318936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.319002 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:12Z","lastTransitionTime":"2025-12-13T03:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.421537 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.421578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.421589 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.421612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.421624 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:12Z","lastTransitionTime":"2025-12-13T03:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.524078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.524119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.524150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.524167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.524184 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:12Z","lastTransitionTime":"2025-12-13T03:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.620904 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:12 crc kubenswrapper[4766]: E1213 03:45:12.621125 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.621221 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:12 crc kubenswrapper[4766]: E1213 03:45:12.621274 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.626499 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.626527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.626536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.626551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.626561 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:12Z","lastTransitionTime":"2025-12-13T03:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.729827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.729881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.729894 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.729916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.729930 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:12Z","lastTransitionTime":"2025-12-13T03:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.832715 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.832765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.832778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.832796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.832812 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:12Z","lastTransitionTime":"2025-12-13T03:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.935136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.935223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.935237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.935261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:12 crc kubenswrapper[4766]: I1213 03:45:12.935277 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:12Z","lastTransitionTime":"2025-12-13T03:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.037824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.037872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.037886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.037904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.037915 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.140791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.140839 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.140850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.140869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.140883 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.243817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.243885 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.243897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.243919 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.244827 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.348071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.348117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.348129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.348148 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.348161 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.451776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.451828 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.451844 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.451870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.451888 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.510705 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.510907 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.510928 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:45:29.510893415 +0000 UTC m=+61.020826419 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.510992 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.511050 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.511114 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511283 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511376 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511337 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511406 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511523 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:29.511496532 +0000 UTC m=+61.021429536 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511728 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:29.511695308 +0000 UTC m=+61.021628312 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511527 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511787 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511816 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511337 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511886 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:29.511865783 +0000 UTC m=+61.021798787 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.511945 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:45:29.511920815 +0000 UTC m=+61.021853819 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.554440 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.554496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.554509 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.554531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.554545 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.616210 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.616340 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.616383 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.616601 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.657985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.658060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.658075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.658102 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.658113 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.678989 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.679045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.679054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.679071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.679082 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.696535 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:13Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.701183 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.701263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.701278 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.701298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.701311 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.713615 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:13Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.718361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.718409 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.718419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.718452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.718464 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.732083 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:13Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.736346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.736398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.736412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.736450 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.736463 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.758463 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:13Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.763129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.763184 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.763194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.763216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.763227 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.781508 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:13Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:13 crc kubenswrapper[4766]: E1213 03:45:13.781818 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.784693 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.784770 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.784797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.784836 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.784868 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.891299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.891354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.891364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.891384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.891396 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.994474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.994531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.994544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.994582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:13 crc kubenswrapper[4766]: I1213 03:45:13.994595 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:13Z","lastTransitionTime":"2025-12-13T03:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.097905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.098704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.098951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.099172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.099272 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:14Z","lastTransitionTime":"2025-12-13T03:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.202861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.202904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.202913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.202930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.202940 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:14Z","lastTransitionTime":"2025-12-13T03:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.308156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.308194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.308203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.308219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.308228 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:14Z","lastTransitionTime":"2025-12-13T03:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.321296 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:14 crc kubenswrapper[4766]: E1213 03:45:14.321508 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:14 crc kubenswrapper[4766]: E1213 03:45:14.321609 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs podName:84c9636d-a525-40e8-bc35-af07ecbdeafc nodeName:}" failed. No retries permitted until 2025-12-13 03:45:18.321592496 +0000 UTC m=+49.831525450 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs") pod "network-metrics-daemon-qvxrm" (UID: "84c9636d-a525-40e8-bc35-af07ecbdeafc") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.411508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.411854 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.411939 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.412032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.412178 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:14Z","lastTransitionTime":"2025-12-13T03:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.524940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.525315 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.525410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.525525 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.525617 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:14Z","lastTransitionTime":"2025-12-13T03:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.616157 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.616157 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:14 crc kubenswrapper[4766]: E1213 03:45:14.616416 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:14 crc kubenswrapper[4766]: E1213 03:45:14.616526 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.628767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.628809 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.628820 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.628839 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.628882 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:14Z","lastTransitionTime":"2025-12-13T03:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.642184 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.642361 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.662445 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" probeResult="failure" output="" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.730364 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" probeResult="failure" output="" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.731203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.731253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.731265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.731284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.731295 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:14Z","lastTransitionTime":"2025-12-13T03:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.833770 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.834251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.834358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.834475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.834600 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:14Z","lastTransitionTime":"2025-12-13T03:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.839196 4766 generic.go:334] "Generic (PLEG): container finished" podID="11c14fd8-7cc0-4f63-8900-c0ae7306d019" containerID="9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136" exitCode=0 Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.839444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerDied","Data":"9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.865719 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:14Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.882147 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:14Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.895728 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:14Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.911239 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:14Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.927536 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:14Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.937771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.937819 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.937833 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.937854 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.937867 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:14Z","lastTransitionTime":"2025-12-13T03:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.942640 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:14Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.961729 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:14Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.978024 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:14Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:14 crc kubenswrapper[4766]: I1213 03:45:14.993734 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:14Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.011992 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:15Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.037569 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2
ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:15Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.041460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.041527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.041537 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.041556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.041566 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:15Z","lastTransitionTime":"2025-12-13T03:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.058839 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:15Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.072725 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:15Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.144308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.144346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.144355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.144374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.144384 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:15Z","lastTransitionTime":"2025-12-13T03:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.209134 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:15Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.229113 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b1
5acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:15Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.249895 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:15Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.251576 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.251627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.251641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.251661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.251674 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:15Z","lastTransitionTime":"2025-12-13T03:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.354390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.354522 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.354538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.354574 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.354591 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:15Z","lastTransitionTime":"2025-12-13T03:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.458020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.458093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.458107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.458132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.458149 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:15Z","lastTransitionTime":"2025-12-13T03:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.584806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.584850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.584862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.584880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.584890 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:15Z","lastTransitionTime":"2025-12-13T03:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.615495 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.615612 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:15 crc kubenswrapper[4766]: E1213 03:45:15.615800 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:15 crc kubenswrapper[4766]: E1213 03:45:15.615912 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.688605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.688659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.688669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.688688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.688702 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:15Z","lastTransitionTime":"2025-12-13T03:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.791510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.791561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.791572 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.791590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.791601 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:15Z","lastTransitionTime":"2025-12-13T03:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.858378 4766 generic.go:334] "Generic (PLEG): container finished" podID="11c14fd8-7cc0-4f63-8900-c0ae7306d019" containerID="1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21" exitCode=0 Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.858465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerDied","Data":"1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21"} Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.894605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.894669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.894725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.894757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.894784 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:15Z","lastTransitionTime":"2025-12-13T03:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.920696 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:15Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.946682 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:15Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.998308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.998372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.998394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:15 crc kubenswrapper[4766]: I1213 03:45:15.998442 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:15 crc 
kubenswrapper[4766]: I1213 03:45:15.998463 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:15Z","lastTransitionTime":"2025-12-13T03:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.003926 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.020308 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.036036 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.054209 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.068685 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.088631 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2
ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.102865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.102923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.102978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.103000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.103015 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:16Z","lastTransitionTime":"2025-12-13T03:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.106692 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.123184 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.143893 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.160617 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.176596 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.200234 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.206981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.207010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.207018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.207036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.207046 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:16Z","lastTransitionTime":"2025-12-13T03:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.221193 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f68
9044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.235163 4766 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.311005 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.311055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.311067 
4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.311095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.311112 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:16Z","lastTransitionTime":"2025-12-13T03:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.414118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.414172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.414189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.414221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.414235 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:16Z","lastTransitionTime":"2025-12-13T03:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.517633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.517685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.517700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.517723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.517738 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:16Z","lastTransitionTime":"2025-12-13T03:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.616170 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:16 crc kubenswrapper[4766]: E1213 03:45:16.616366 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.616615 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:16 crc kubenswrapper[4766]: E1213 03:45:16.616672 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.620080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.620110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.620122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.620137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.620153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:16Z","lastTransitionTime":"2025-12-13T03:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.722943 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.722992 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.723002 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.723018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.723029 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:16Z","lastTransitionTime":"2025-12-13T03:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.826452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.826505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.826516 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.826536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.826550 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:16Z","lastTransitionTime":"2025-12-13T03:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.864912 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/0.log" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.867362 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441" exitCode=1 Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.867463 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.868300 4766 scope.go:117] "RemoveContainer" containerID="7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.874354 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" event={"ID":"11c14fd8-7cc0-4f63-8900-c0ae7306d019","Type":"ContainerStarted","Data":"7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.904318 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd91
23b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.919911 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.929637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.929680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.929690 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.929708 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.929724 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:16Z","lastTransitionTime":"2025-12-13T03:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.935066 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.950361 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.964715 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.978367 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:16 crc kubenswrapper[4766]: I1213 03:45:16.993558 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:16Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.009056 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.027705 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.033395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.033468 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.033480 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.033501 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.033515 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:17Z","lastTransitionTime":"2025-12-13T03:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.042934 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.058486 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.073607 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.096838 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2
ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:16.356939 5954 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:16.356956 5954 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:16.356969 5954 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:16.356983 5954 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:16.357051 5954 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:16.357059 5954 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:16.357078 5954 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:16.357093 5954 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:16.357104 5954 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:16.357122 5954 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:16.357134 5954 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:16.357151 5954 factory.go:656] Stopping watch factory\\\\nI1213 03:45:16.357174 5954 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:16.357188 5954 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:16.357195 5954 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.113714 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.130530 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.136602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.136645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.136658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.136680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.136693 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:17Z","lastTransitionTime":"2025-12-13T03:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.148516 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f68
9044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.164800 4766 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 
13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.178108 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.193143 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.208491 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.225981 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.239012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.239050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.239063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.239081 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.239096 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:17Z","lastTransitionTime":"2025-12-13T03:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.239515 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.257600 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.271149 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.296235 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:16.356939 5954 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:16.356956 5954 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:16.356969 5954 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:16.356983 5954 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:16.357051 5954 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:16.357059 5954 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:16.357078 5954 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:16.357093 5954 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:16.357104 5954 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:16.357122 5954 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:16.357134 5954 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:16.357151 5954 factory.go:656] Stopping watch factory\\\\nI1213 03:45:16.357174 5954 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:16.357188 5954 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:16.357195 5954 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.323514 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.341842 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.341909 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.341945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.341966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.342086 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.341978 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:17Z","lastTransitionTime":"2025-12-13T03:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.356515 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.369856 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.384149 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.398571 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.409602 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.445163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.445207 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.445218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.445239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.445254 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:17Z","lastTransitionTime":"2025-12-13T03:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.548103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.548149 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.548163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.548183 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.548196 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:17Z","lastTransitionTime":"2025-12-13T03:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.615857 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:17 crc kubenswrapper[4766]: E1213 03:45:17.616030 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.616046 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.616610 4766 scope.go:117] "RemoveContainer" containerID="1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3" Dec 13 03:45:17 crc kubenswrapper[4766]: E1213 03:45:17.616601 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.679290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.679357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.679373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.679411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.679449 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:17Z","lastTransitionTime":"2025-12-13T03:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.783185 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.783227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.783240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.783259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.783271 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:17Z","lastTransitionTime":"2025-12-13T03:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.879339 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/0.log" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.882050 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.882507 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.885308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.885333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.885342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.885365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.885375 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:17Z","lastTransitionTime":"2025-12-13T03:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.897879 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.918915 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:16.356939 5954 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:16.356956 5954 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:16.356969 5954 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:16.356983 5954 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:16.357051 5954 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:16.357059 5954 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:16.357078 5954 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:16.357093 5954 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:16.357104 5954 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:16.357122 5954 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:16.357134 5954 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:16.357151 5954 factory.go:656] Stopping watch factory\\\\nI1213 03:45:16.357174 5954 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:16.357188 5954 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:16.357195 5954 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.933195 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.947380 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.962260 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.975171 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.987614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.987649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.987661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.987683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.987695 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:17Z","lastTransitionTime":"2025-12-13T03:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:17 crc kubenswrapper[4766]: I1213 03:45:17.989578 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:17Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.003772 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.014713 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.024996 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.036131 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.047819 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.062966 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.084628 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.094350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.094387 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.094397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.094414 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.094451 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:18Z","lastTransitionTime":"2025-12-13T03:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.108349 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.126655 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.197949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.198261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.198274 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.198292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.198305 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:18Z","lastTransitionTime":"2025-12-13T03:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.301947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.302024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.302047 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.302083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.302109 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:18Z","lastTransitionTime":"2025-12-13T03:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.388800 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:18 crc kubenswrapper[4766]: E1213 03:45:18.389067 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:18 crc kubenswrapper[4766]: E1213 03:45:18.389225 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs podName:84c9636d-a525-40e8-bc35-af07ecbdeafc nodeName:}" failed. No retries permitted until 2025-12-13 03:45:26.389172661 +0000 UTC m=+57.899105695 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs") pod "network-metrics-daemon-qvxrm" (UID: "84c9636d-a525-40e8-bc35-af07ecbdeafc") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.404881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.404928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.404940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.404955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.404965 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:18Z","lastTransitionTime":"2025-12-13T03:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.507964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.508021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.508032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.508050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.508062 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:18Z","lastTransitionTime":"2025-12-13T03:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.614909 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.614974 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.614987 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.615008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.615020 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:18Z","lastTransitionTime":"2025-12-13T03:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.615533 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.615533 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:18 crc kubenswrapper[4766]: E1213 03:45:18.615696 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:18 crc kubenswrapper[4766]: E1213 03:45:18.615775 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.725702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.725764 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.725788 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.725812 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.725824 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:18Z","lastTransitionTime":"2025-12-13T03:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.830508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.830564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.830576 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.830594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.830605 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:18Z","lastTransitionTime":"2025-12-13T03:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.887757 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.889893 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.890390 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.891823 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/1.log" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.892500 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/0.log" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.895848 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed" exitCode=1 Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.895906 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.895975 4766 scope.go:117] "RemoveContainer" containerID="7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.901696 4766 scope.go:117] "RemoveContainer" containerID="2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed" Dec 13 03:45:18 crc kubenswrapper[4766]: E1213 03:45:18.902099 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.923322 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.933229 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.933283 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.933293 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.933310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.933321 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:18Z","lastTransitionTime":"2025-12-13T03:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.942130 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.957058 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.970538 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:18 crc kubenswrapper[4766]: I1213 03:45:18.988389 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:18Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.006074 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.023406 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:16.356939 5954 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:16.356956 5954 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:16.356969 5954 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:16.356983 5954 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:16.357051 5954 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:16.357059 5954 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:16.357078 5954 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:16.357093 5954 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:16.357104 5954 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:16.357122 5954 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:16.357134 5954 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:16.357151 5954 factory.go:656] Stopping watch factory\\\\nI1213 03:45:16.357174 5954 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:16.357188 5954 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:16.357195 5954 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.036016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.036113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.036126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.036148 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.036161 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:19Z","lastTransitionTime":"2025-12-13T03:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.038958 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.053713 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.067793 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.081018 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.095731 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.112010 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.123471 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.136807 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.138711 4766 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.138767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.138779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.138802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.138815 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:19Z","lastTransitionTime":"2025-12-13T03:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.149914 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.164151 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.177331 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.191319 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.206062 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.220308 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.232496 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.241473 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.241548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.241605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.241626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.241637 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:19Z","lastTransitionTime":"2025-12-13T03:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.247696 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.262251 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.276609 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.290508 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.303901 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.318626 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.338522 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b9
9e3efe9fd16632831452ceed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:16.356939 5954 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:16.356956 5954 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:16.356969 5954 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:16.356983 5954 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:16.357051 5954 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:16.357059 5954 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:16.357078 5954 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:16.357093 5954 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:16.357104 5954 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:16.357122 5954 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:16.357134 5954 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:16.357151 5954 factory.go:656] Stopping watch factory\\\\nI1213 03:45:16.357174 5954 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:16.357188 5954 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:16.357195 5954 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:18Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI1213 03:45:17.788044 6207 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:17.788725 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:17.788754 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:17.788763 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:17.788775 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1213 03:45:17.788780 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:17.788816 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:17.788856 6207 factory.go:656] Stopping watch factory\\\\nI1213 03:45:17.788874 6207 ovnkube.go:599] Stopped ovnkube\\\\nI1213 03:45:17.789027 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:17.789038 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:17.789044 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:17.789050 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 
03:45:17.789056 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:17.789063 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.343806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.343835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.343844 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.343862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.343877 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:19Z","lastTransitionTime":"2025-12-13T03:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.356204 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.369073 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.387383 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.446980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.447033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:19 crc 
kubenswrapper[4766]: I1213 03:45:19.447047 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.447067 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.447078 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:19Z","lastTransitionTime":"2025-12-13T03:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.550666 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.550733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.550757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.550788 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.550814 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:19Z","lastTransitionTime":"2025-12-13T03:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.616115 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:19 crc kubenswrapper[4766]: E1213 03:45:19.616300 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.616370 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:19 crc kubenswrapper[4766]: E1213 03:45:19.616574 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.633562 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.647232 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.653324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.653361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.653373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.653392 4766 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.653416 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:19Z","lastTransitionTime":"2025-12-13T03:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.666449 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.683722 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.699552 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.711862 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.726628 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.743094 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-
config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.755354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.755396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.755406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.755421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.755446 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:19Z","lastTransitionTime":"2025-12-13T03:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.767624 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9a47ec54cae6bb8656e42d032c74a0fbc1ede2ebd302ab768bc374cddfe441\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:16.356939 5954 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:16.356956 5954 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:16.356969 5954 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:16.356983 5954 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:16.357051 5954 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:16.357059 5954 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:16.357078 5954 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:16.357093 5954 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:16.357104 5954 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:16.357122 5954 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:16.357134 5954 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:16.357151 5954 factory.go:656] Stopping watch factory\\\\nI1213 03:45:16.357174 5954 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:16.357188 5954 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:16.357195 5954 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:18Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI1213 03:45:17.788044 6207 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:17.788725 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:17.788754 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:17.788763 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:17.788775 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1213 03:45:17.788780 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:17.788816 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:17.788856 6207 factory.go:656] Stopping watch factory\\\\nI1213 03:45:17.788874 6207 ovnkube.go:599] Stopped ovnkube\\\\nI1213 03:45:17.789027 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:17.789038 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:17.789044 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:17.789050 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:17.789056 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:17.789063 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 
03:45:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.783276 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.797045 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.811499 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.826725 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.841133 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.858559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.858612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.858624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.858646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.858659 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:19Z","lastTransitionTime":"2025-12-13T03:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.861151 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.873630 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.902757 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/1.log" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.906809 4766 scope.go:117] "RemoveContainer" containerID="2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed" 
Dec 13 03:45:19 crc kubenswrapper[4766]: E1213 03:45:19.907012 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.919818 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.931679 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.944064 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.957876 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.960976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.961049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.961063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.961084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.961097 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:19Z","lastTransitionTime":"2025-12-13T03:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.975307 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.988520 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:19 crc kubenswrapper[4766]: I1213 03:45:19.999485 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:19Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.014087 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:20Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.030157 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:20Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.051915 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b9
9e3efe9fd16632831452ceed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:18Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI1213 03:45:17.788044 6207 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:17.788725 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:17.788754 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:17.788763 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:17.788775 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1213 03:45:17.788780 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:17.788816 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:17.788856 6207 factory.go:656] Stopping watch factory\\\\nI1213 03:45:17.788874 6207 ovnkube.go:599] Stopped ovnkube\\\\nI1213 03:45:17.789027 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:17.789038 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:17.789044 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:17.789050 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:17.789056 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:17.789063 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:20Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.063781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.063826 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.063838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.063859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.063872 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:20Z","lastTransitionTime":"2025-12-13T03:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.069060 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:20Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.087316 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:20Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.103811 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a74
8dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:20Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.117321 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:20Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.130607 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:20Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.142446 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:20Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.166318 4766 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.166361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.166373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.166389 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.166399 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:20Z","lastTransitionTime":"2025-12-13T03:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.268726 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.268764 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.268774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.268791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.268802 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:20Z","lastTransitionTime":"2025-12-13T03:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.371589 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.371631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.371640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.371656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.371667 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:20Z","lastTransitionTime":"2025-12-13T03:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.474589 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.474650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.474664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.474686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.474706 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:20Z","lastTransitionTime":"2025-12-13T03:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.577731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.577775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.577787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.577810 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.577825 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:20Z","lastTransitionTime":"2025-12-13T03:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.615901 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.615971 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:20 crc kubenswrapper[4766]: E1213 03:45:20.616083 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:20 crc kubenswrapper[4766]: E1213 03:45:20.616256 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.680845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.680899 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.680911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.680931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.680945 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:20Z","lastTransitionTime":"2025-12-13T03:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.783968 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.784022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.784033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.784051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.784062 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:20Z","lastTransitionTime":"2025-12-13T03:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.886954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.887003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.887012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.887028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.887040 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:20Z","lastTransitionTime":"2025-12-13T03:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.990066 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.990114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.990129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.990150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:20 crc kubenswrapper[4766]: I1213 03:45:20.990188 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:20Z","lastTransitionTime":"2025-12-13T03:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.092344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.092398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.092410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.092431 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.092457 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:21Z","lastTransitionTime":"2025-12-13T03:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.195802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.195852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.195864 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.195884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.195900 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:21Z","lastTransitionTime":"2025-12-13T03:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.299185 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.299817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.300257 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.300692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.300892 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:21Z","lastTransitionTime":"2025-12-13T03:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.403073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.403120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.403134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.403156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.403168 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:21Z","lastTransitionTime":"2025-12-13T03:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.506320 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.506619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.506692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.506773 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.506839 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:21Z","lastTransitionTime":"2025-12-13T03:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.609455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.609577 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.609600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.609620 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.609632 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:21Z","lastTransitionTime":"2025-12-13T03:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.615185 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:21 crc kubenswrapper[4766]: E1213 03:45:21.615296 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.615549 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:21 crc kubenswrapper[4766]: E1213 03:45:21.615813 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.713285 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.713341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.713358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.713383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.713400 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:21Z","lastTransitionTime":"2025-12-13T03:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.815974 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.816601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.816661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.816707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.816738 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:21Z","lastTransitionTime":"2025-12-13T03:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.919475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.919561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.919583 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.919620 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:21 crc kubenswrapper[4766]: I1213 03:45:21.919644 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:21Z","lastTransitionTime":"2025-12-13T03:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.022024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.022075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.022093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.022113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.022128 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:22Z","lastTransitionTime":"2025-12-13T03:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.124722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.124754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.124763 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.124779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.124789 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:22Z","lastTransitionTime":"2025-12-13T03:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.228324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.228365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.228384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.228407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.228433 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:22Z","lastTransitionTime":"2025-12-13T03:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.332404 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.332476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.332491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.332509 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.332520 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:22Z","lastTransitionTime":"2025-12-13T03:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.435234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.435277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.435287 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.435306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.435318 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:22Z","lastTransitionTime":"2025-12-13T03:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.538261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.538306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.538316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.538333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.538344 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:22Z","lastTransitionTime":"2025-12-13T03:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.616290 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.616351 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:22 crc kubenswrapper[4766]: E1213 03:45:22.616497 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:22 crc kubenswrapper[4766]: E1213 03:45:22.616659 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.642132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.642187 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.642207 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.642236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.642254 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:22Z","lastTransitionTime":"2025-12-13T03:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.746740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.746790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.746800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.746824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.746836 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:22Z","lastTransitionTime":"2025-12-13T03:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.850032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.850082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.850115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.850134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.850145 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:22Z","lastTransitionTime":"2025-12-13T03:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.953305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.953356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.953370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.953387 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:22 crc kubenswrapper[4766]: I1213 03:45:22.953400 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:22Z","lastTransitionTime":"2025-12-13T03:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.057137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.057189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.057204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.057227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.057245 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.160189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.160230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.160240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.160258 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.160267 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.263758 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.264103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.264184 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.264276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.264377 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.367175 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.367227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.367237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.367257 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.367269 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.469772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.470140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.470391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.470593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.470711 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.573159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.573203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.573214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.573231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.573242 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.616090 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:23 crc kubenswrapper[4766]: E1213 03:45:23.616291 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.616302 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:23 crc kubenswrapper[4766]: E1213 03:45:23.616594 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.676640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.676687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.676699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.676718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.676728 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.779799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.780080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.780209 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.780390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.780598 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.835736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.835789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.835805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.835824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.835836 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: E1213 03:45:23.849362 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:23Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.853298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.853338 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.853350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.853370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.853390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: E1213 03:45:23.867222 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:23Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.871606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.871640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.871657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.871683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.871698 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: E1213 03:45:23.884775 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:23Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.888787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.888841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
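Every "Error updating node status, will retry" entry in this burst fails identically: the kubelet's status PATCH for node "crc" is routed through the validating webhook "node.network-node-identity.openshift.io" at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current clock time of 2025-12-13. Each retry carries a byte-identical status payload, condensed here to {...} after its first full appearance above. A minimal sketch to confirm the expiry independently of the kubelet, assuming the webhook endpoint is reachable from the node and the third-party Python package "cryptography" is installed (the host and port are taken from the log line; nothing else is known about the endpoint):

    # Fetch the webhook's serving certificate and print its validity window.
    # Assumptions: run on the node itself; 127.0.0.1:9743 comes from the log
    # line above; the 'cryptography' package is available.
    import socket
    import ssl

    from cryptography import x509

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # complete the handshake even though the cert is expired

    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # the log reports 2025-08-24T17:21:41Z

If the printed notAfter matches the date in the error, the fix belongs on the webhook side (rotating its serving certificate); the kubelet is behaving correctly by retrying.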
event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.888852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.888871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.888885 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: E1213 03:45:23.901465 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:23Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.907195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.907299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
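Interleaved with the webhook failures, the kubelet keeps the node NotReady for a second reason: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. That directory is normally populated once the cluster network provider starts, which here appears blocked, so the condition repeats on every sync. A rough approximation of the readiness check, assuming only the directory path quoted in the message and the conventional CNI config extensions (the real check lives in the container runtime's CNI plugin manager, not in this sketch):

    # Approximate the check behind "no CNI configuration file in
    # /etc/kubernetes/cni/net.d/": look for conventional CNI config files.
    # The path comes from the log message; the extensions are the usual
    # CNI ones and are an assumption here.
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

    configs = sorted(
        p for p in CNI_CONF_DIR.glob("*")
        if p.suffix in {".conf", ".conflist", ".json"}
    )
    if not configs:
        print(f"no CNI configuration file in {CNI_CONF_DIR}/ -> NetworkReady=false")
    else:
        for p in configs:
            print("found CNI config:", p)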
event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.907315 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.907361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.907393 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:23 crc kubenswrapper[4766]: E1213 03:45:23.921329 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:23Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:23 crc kubenswrapper[4766]: E1213 03:45:23.921508 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.922997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
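The burst ends with "Unable to update node status" err="update node status exceeds retry count": the upstream kubelet attempts the status update a fixed number of times per sync (the nodeStatusUpdateRetry constant, 5 in the upstream source) before giving up until the next interval, which is why the webhook failure appears in a tight group and the cycle then restarts. A schematic of that loop, simplified from the upstream behavior rather than transcribed from it:

    # Schematic of the kubelet's node-status retry loop. nodeStatusUpdateRetry
    # is 5 in upstream kubelet; the exception text stands in for the webhook
    # rejection seen in this log. This is an illustration, not the real code.
    NODE_STATUS_UPDATE_RETRY = 5

    def try_update_node_status() -> None:
        # Placeholder for the PATCH that the expired-certificate webhook rejects.
        raise RuntimeError("x509: certificate has expired or is not yet valid")

    def update_node_status() -> None:
        for _ in range(NODE_STATUS_UPDATE_RETRY):
            try:
                try_update_node_status()
                return  # success: the patch landed
            except RuntimeError as err:
                print(f'"Error updating node status, will retry" err="{err}"')
        print('"Unable to update node status" err="update node status exceeds retry count"')

    update_node_status()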
event="NodeHasSufficientMemory" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.923041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.923051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.923069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:23 crc kubenswrapper[4766]: I1213 03:45:23.923081 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:23Z","lastTransitionTime":"2025-12-13T03:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.026309 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.026368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.026384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.026407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.026427 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:24Z","lastTransitionTime":"2025-12-13T03:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.129548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.129599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.129611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.129633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.129648 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:24Z","lastTransitionTime":"2025-12-13T03:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.232626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.232673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.232685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.232705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.232719 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:24Z","lastTransitionTime":"2025-12-13T03:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.335503 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.335551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.335563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.335580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.335592 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:24Z","lastTransitionTime":"2025-12-13T03:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.437944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.437993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.438003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.438025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.438035 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:24Z","lastTransitionTime":"2025-12-13T03:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.541750 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.542186 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.542270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.542421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.542534 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:24Z","lastTransitionTime":"2025-12-13T03:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.615584 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:24 crc kubenswrapper[4766]: E1213 03:45:24.615785 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.615584 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:24 crc kubenswrapper[4766]: E1213 03:45:24.616019 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.646688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.646748 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.646759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.646782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.646794 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:24Z","lastTransitionTime":"2025-12-13T03:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.749946 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.750277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.750356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.750466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.750555 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:24Z","lastTransitionTime":"2025-12-13T03:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.853412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.853482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.853494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.853517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.853529 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:24Z","lastTransitionTime":"2025-12-13T03:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.956268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.956334 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.956353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.956379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:24 crc kubenswrapper[4766]: I1213 03:45:24.956397 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:24Z","lastTransitionTime":"2025-12-13T03:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.059955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.060013 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.060027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.060050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.060064 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:25Z","lastTransitionTime":"2025-12-13T03:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.163295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.163627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.163740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.163859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.164095 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:25Z","lastTransitionTime":"2025-12-13T03:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.266920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.266972 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.266983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.267002 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.267015 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:25Z","lastTransitionTime":"2025-12-13T03:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.369298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.369351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.369364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.369386 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.369397 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:25Z","lastTransitionTime":"2025-12-13T03:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.472185 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.472234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.472245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.472265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.472281 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:25Z","lastTransitionTime":"2025-12-13T03:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.575946 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.575999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.576010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.576028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.576040 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:25Z","lastTransitionTime":"2025-12-13T03:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.615758 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.615841 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:25 crc kubenswrapper[4766]: E1213 03:45:25.616041 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:25 crc kubenswrapper[4766]: E1213 03:45:25.616185 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.678478 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.678547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.678561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.678590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.678604 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:25Z","lastTransitionTime":"2025-12-13T03:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.781684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.781745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.781755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.781771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.781784 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:25Z","lastTransitionTime":"2025-12-13T03:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.885098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.885171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.885184 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.885206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.885219 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:25Z","lastTransitionTime":"2025-12-13T03:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.989059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.989171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.989202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.989240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:25 crc kubenswrapper[4766]: I1213 03:45:25.989265 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:25Z","lastTransitionTime":"2025-12-13T03:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.092522 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.092605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.092619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.092642 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.092656 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:26Z","lastTransitionTime":"2025-12-13T03:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.195360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.195406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.195420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.195460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.195477 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:26Z","lastTransitionTime":"2025-12-13T03:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.297888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.297947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.297959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.297978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.297990 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:26Z","lastTransitionTime":"2025-12-13T03:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.401721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.401775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.401786 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.401810 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.401823 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:26Z","lastTransitionTime":"2025-12-13T03:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.481028 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:26 crc kubenswrapper[4766]: E1213 03:45:26.481316 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:26 crc kubenswrapper[4766]: E1213 03:45:26.481470 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs podName:84c9636d-a525-40e8-bc35-af07ecbdeafc nodeName:}" failed. No retries permitted until 2025-12-13 03:45:42.481417477 +0000 UTC m=+73.991350511 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs") pod "network-metrics-daemon-qvxrm" (UID: "84c9636d-a525-40e8-bc35-af07ecbdeafc") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.504850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.504887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.504898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.504918 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.504929 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:26Z","lastTransitionTime":"2025-12-13T03:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.582984 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.595215 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.595945 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.607998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.608055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.608068 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.608087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.608098 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:26Z","lastTransitionTime":"2025-12-13T03:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.609504 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.615659 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.615674 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:26 crc kubenswrapper[4766]: E1213 03:45:26.615819 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:26 crc kubenswrapper[4766]: E1213 03:45:26.615900 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.622889 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.638962 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.657340 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.670207 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.684070 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.699417 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.710492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.710531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.710544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.710564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.710577 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:26Z","lastTransitionTime":"2025-12-13T03:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.714515 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.730073 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.745152 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.760232 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.774998 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.796091 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b9
9e3efe9fd16632831452ceed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:18Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI1213 03:45:17.788044 6207 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:17.788725 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:17.788754 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:17.788763 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:17.788775 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1213 03:45:17.788780 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:17.788816 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:17.788856 6207 factory.go:656] Stopping watch factory\\\\nI1213 03:45:17.788874 6207 ovnkube.go:599] Stopped ovnkube\\\\nI1213 03:45:17.789027 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:17.789038 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:17.789044 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:17.789050 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:17.789056 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:17.789063 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.812839 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.812880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.812892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.812910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.812921 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:26Z","lastTransitionTime":"2025-12-13T03:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.814219 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.828240 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:26Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.916284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.916335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.916345 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.916363 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:26 crc kubenswrapper[4766]: I1213 03:45:26.916376 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:26Z","lastTransitionTime":"2025-12-13T03:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.019637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.019725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.019768 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.019797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.019815 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:27Z","lastTransitionTime":"2025-12-13T03:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.122286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.122674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.122776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.122885 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.122969 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:27Z","lastTransitionTime":"2025-12-13T03:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.226156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.226200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.226209 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.226230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.226241 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:27Z","lastTransitionTime":"2025-12-13T03:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.329230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.329270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.329285 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.329305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.329336 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:27Z","lastTransitionTime":"2025-12-13T03:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.432305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.432341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.432351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.432366 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.432377 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:27Z","lastTransitionTime":"2025-12-13T03:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.536238 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.536312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.536330 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.536357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.536373 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:27Z","lastTransitionTime":"2025-12-13T03:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.615401 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.615541 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:27 crc kubenswrapper[4766]: E1213 03:45:27.615695 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:27 crc kubenswrapper[4766]: E1213 03:45:27.615991 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.640132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.640627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.640810 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.640957 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.641112 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:27Z","lastTransitionTime":"2025-12-13T03:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.743990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.744022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.744031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.744049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.744060 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:27Z","lastTransitionTime":"2025-12-13T03:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.847158 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.847213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.847234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.847253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:27 crc kubenswrapper[4766]: I1213 03:45:27.847266 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:27Z","lastTransitionTime":"2025-12-13T03:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the same five-entry NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready" cycle repeats, identical except for its timestamps, at 03:45:27.950, 03:45:28.053, 03:45:28.156, 03:45:28.259, 03:45:28.361, 03:45:28.465 and 03:45:28.568]
Dec 13 03:45:28 crc kubenswrapper[4766]: I1213 03:45:28.615671 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm"
Dec 13 03:45:28 crc kubenswrapper[4766]: I1213 03:45:28.615671 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 03:45:28 crc kubenswrapper[4766]: E1213 03:45:28.615832 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc"
Dec 13 03:45:28 crc kubenswrapper[4766]: E1213 03:45:28.615855 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[the same status cycle repeats again at 03:45:28.672, 03:45:28.775, 03:45:28.879, 03:45:28.983, 03:45:29.087, 03:45:29.189, 03:45:29.292, 03:45:29.395 and 03:45:29.498]
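All of the churn above reduces to a single condition: the kubelet keeps marking the node NotReady because /etc/kubernetes/cni/net.d/ contains no CNI network configuration yet. As a rough sketch of the check behind that message (this is not the kubelet's own code; libcni-style loaders conventionally pick up *.conf, *.conflist and *.json files, and the directory name is the one quoted in the log), a standalone probe could look like:

    // cniprobe.go - minimal sketch; reports the same condition the kubelet
    // logs above by scanning the CNI conf dir for recognised config files.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
        var found []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(confDir, pat))
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            found = append(found, matches...)
        }
        if len(found) == 0 {
            // Same situation as the repeated NodeNotReady condition above.
            fmt.Printf("no CNI configuration file in %s\n", confDir)
            return
        }
        for _, f := range found {
            fmt.Println("found CNI config:", f)
        }
    }

On an OpenShift/CRC node this directory is normally populated by the network plugin once the default network comes up (for example a Multus-generated config; the exact file name is not visible in this log), at which point the NodeNotReady loop stops by itself.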
Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.518503 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.518671 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.518696 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:46:01.518672706 +0000 UTC m=+93.028605670 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.518731 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.518764 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.518798 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.518821 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.518840 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.518875 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
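The UnmountVolume failure just above is not about the volume contents: the kubelet has no registered CSI driver named kubevirt.io.hostpath-provisioner at this point, so the operation is parked with the logged 32s backoff. The kubelet normally learns about CSI drivers via registration sockets under /var/lib/kubelet/plugins_registry; a minimal sketch of that check (standard kubelet paths assumed, not something taken from this log) is:

    // csiprobe.go - minimal sketch: list kubelet plugin registration entries
    // and report whether the driver named in the log has registered yet.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const driver = "kubevirt.io.hostpath-provisioner" // driver named in the error
        entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
        if err != nil {
            fmt.Fprintln(os.Stderr, "cannot read plugins_registry:", err)
            os.Exit(1)
        }
        registered := false
        for _, e := range entries {
            fmt.Println("registration entry:", e.Name())
            if strings.Contains(e.Name(), driver) {
                registered = true
            }
        }
        if !registered {
            fmt.Printf("driver %s not registered; kubelet retries the unmount after the logged backoff\n", driver)
        }
    }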
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.518889 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 03:46:01.518876512 +0000 UTC m=+93.028809476 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.518798 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.518915 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:46:01.518904553 +0000 UTC m=+93.028837517 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.518930 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.519029 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.519050 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.519087 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
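The object "..."/"..." not registered errors in this stretch come from the kubelet side, not the API server: after the restart, the kubelet's ConfigMap/Secret managers have not yet registered those objects for the affected pods, so projected-volume setup fails and is retried. A hedged way to confirm the objects themselves exist server-side is to query them directly with client-go (the kubeconfig path below is an assumption; any admin kubeconfig for this cluster would do):

    // cmprobe.go - minimal sketch (requires k8s.io/client-go in go.mod):
    // fetch the two ConfigMaps the projected volumes above depend on.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path; substitute the kubeconfig you actually use.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            _, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").Get(context.TODO(), name, metav1.GetOptions{})
            fmt.Printf("configmap openshift-network-diagnostics/%s: err=%v\n", name, err)
        }
    }

If these Gets succeed while the kubelet still logs "not registered", the errors are transient cache-sync noise that clears once the pods are re-admitted.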
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.519120 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 03:46:01.519110438 +0000 UTC m=+93.029043402 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.519300 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:46:01.519254863 +0000 UTC m=+93.029187867 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
[the node status cycle repeats once more at 03:45:29.601]
Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.615645 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.615750 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.615801 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:29 crc kubenswrapper[4766]: E1213 03:45:29.615943 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.630288 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.643168 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.654561 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.666511 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.679699 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.692512 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.704955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.705041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.705057 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.705081 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.705121 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:29Z","lastTransitionTime":"2025-12-13T03:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.706510 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.716940 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.728536 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.739417 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.761543 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:18Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI1213 03:45:17.788044 6207 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:17.788725 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:17.788754 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:17.788763 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:17.788775 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1213 03:45:17.788780 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:17.788816 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:17.788856 6207 factory.go:656] Stopping watch factory\\\\nI1213 03:45:17.788874 6207 ovnkube.go:599] Stopped ovnkube\\\\nI1213 03:45:17.789027 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:17.789038 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:17.789044 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:17.789050 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:17.789056 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:17.789063 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.777267 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.790336 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.804142 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.807209 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.807246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.807256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.807294 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.807314 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:29Z","lastTransitionTime":"2025-12-13T03:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.815743 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.829838 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b5
2859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\
\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.842099 4766 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:29Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.909691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.909729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 
03:45:29.909741 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.909761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:29 crc kubenswrapper[4766]: I1213 03:45:29.909773 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:29Z","lastTransitionTime":"2025-12-13T03:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.012963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.013364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.013486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.013572 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.013703 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:30Z","lastTransitionTime":"2025-12-13T03:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.117402 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.117530 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.117545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.117570 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.117587 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:30Z","lastTransitionTime":"2025-12-13T03:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.221130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.221179 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.221192 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.221212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.221225 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:30Z","lastTransitionTime":"2025-12-13T03:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.323407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.323501 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.323512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.323530 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.323541 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:30Z","lastTransitionTime":"2025-12-13T03:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.427636 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.427703 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.427716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.427736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.427747 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:30Z","lastTransitionTime":"2025-12-13T03:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.530087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.530142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.530152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.530173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.530188 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:30Z","lastTransitionTime":"2025-12-13T03:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.616160 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.616226 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:30 crc kubenswrapper[4766]: E1213 03:45:30.616478 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:30 crc kubenswrapper[4766]: E1213 03:45:30.616634 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.632792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.632822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.632830 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.632849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.632860 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:30Z","lastTransitionTime":"2025-12-13T03:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.736036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.736074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.736084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.736101 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.736112 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:30Z","lastTransitionTime":"2025-12-13T03:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.838714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.839006 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.839119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.839299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.839395 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:30Z","lastTransitionTime":"2025-12-13T03:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.941973 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.942042 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.942057 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.942080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:30 crc kubenswrapper[4766]: I1213 03:45:30.942095 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:30Z","lastTransitionTime":"2025-12-13T03:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.044645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.044689 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.044721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.044738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.044750 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:31Z","lastTransitionTime":"2025-12-13T03:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.147498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.147551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.147565 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.147588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.147604 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:31Z","lastTransitionTime":"2025-12-13T03:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.251073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.251107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.251116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.251132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.251144 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:31Z","lastTransitionTime":"2025-12-13T03:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.353843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.353875 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.353884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.353902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.353914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:31Z","lastTransitionTime":"2025-12-13T03:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.456726 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.456768 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.456780 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.456801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.456814 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:31Z","lastTransitionTime":"2025-12-13T03:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.560095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.560143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.560152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.560169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.560179 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:31Z","lastTransitionTime":"2025-12-13T03:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.616645 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:31 crc kubenswrapper[4766]: E1213 03:45:31.616815 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.616906 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:31 crc kubenswrapper[4766]: E1213 03:45:31.617083 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.663283 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.663328 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.663338 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.663356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.663368 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:31Z","lastTransitionTime":"2025-12-13T03:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.765811 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.765866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.765876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.765895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.765906 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:31Z","lastTransitionTime":"2025-12-13T03:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.868749 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.868803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.868827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.868849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.868862 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:31Z","lastTransitionTime":"2025-12-13T03:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.972115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.972177 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.972190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.972216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:31 crc kubenswrapper[4766]: I1213 03:45:31.972228 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:31Z","lastTransitionTime":"2025-12-13T03:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.075103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.075180 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.075194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.075218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.075232 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:32Z","lastTransitionTime":"2025-12-13T03:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.182083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.182146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.182159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.182183 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.182196 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:32Z","lastTransitionTime":"2025-12-13T03:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.284805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.284853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.284866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.284886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.284899 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:32Z","lastTransitionTime":"2025-12-13T03:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.387312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.387369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.387382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.387400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.387412 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:32Z","lastTransitionTime":"2025-12-13T03:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.490216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.490260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.490270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.490288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.490301 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:32Z","lastTransitionTime":"2025-12-13T03:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.593568 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.593847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.593940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.594080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.594178 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:32Z","lastTransitionTime":"2025-12-13T03:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.616277 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.616381 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:32 crc kubenswrapper[4766]: E1213 03:45:32.616509 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:32 crc kubenswrapper[4766]: E1213 03:45:32.616683 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.697292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.697341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.697351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.697372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.697384 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:32Z","lastTransitionTime":"2025-12-13T03:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.800888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.800942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.800956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.800978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.800991 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:32Z","lastTransitionTime":"2025-12-13T03:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.905070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.905130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.905143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.905165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:32 crc kubenswrapper[4766]: I1213 03:45:32.905184 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:32Z","lastTransitionTime":"2025-12-13T03:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.009110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.009166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.009180 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.009204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.009219 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:33Z","lastTransitionTime":"2025-12-13T03:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.112214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.112251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.112264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.112282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.112294 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:33Z","lastTransitionTime":"2025-12-13T03:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.215365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.215414 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.215442 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.215462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.215477 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:33Z","lastTransitionTime":"2025-12-13T03:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.318197 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.318247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.318256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.318277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.318291 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:33Z","lastTransitionTime":"2025-12-13T03:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.421493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.421553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.421583 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.421601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.421610 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:33Z","lastTransitionTime":"2025-12-13T03:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.524726 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.524783 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.524881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.524913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.524927 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:33Z","lastTransitionTime":"2025-12-13T03:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.615899 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.615950 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:33 crc kubenswrapper[4766]: E1213 03:45:33.616109 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:33 crc kubenswrapper[4766]: E1213 03:45:33.616373 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.628213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.628251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.628261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.628277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.628289 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:33Z","lastTransitionTime":"2025-12-13T03:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.730719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.730790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.730800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.730818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.730833 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:33Z","lastTransitionTime":"2025-12-13T03:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.833549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.833594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.833604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.833619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.833629 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:33Z","lastTransitionTime":"2025-12-13T03:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.936634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.936688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.936701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.936719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:33 crc kubenswrapper[4766]: I1213 03:45:33.936731 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:33Z","lastTransitionTime":"2025-12-13T03:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.036732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.036779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.036788 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.036804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.036837 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:34 crc kubenswrapper[4766]: E1213 03:45:34.057674 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:34Z is after 
2025-08-24T17:21:41Z" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.065196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.065246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.065260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.065282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.065294 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:34 crc kubenswrapper[4766]: E1213 03:45:34.079003 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:34Z is after 
2025-08-24T17:21:41Z" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.084017 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.084074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.084088 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.084107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.084119 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:34 crc kubenswrapper[4766]: E1213 03:45:34.097387 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:34Z is after 
2025-08-24T17:21:41Z" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.102190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.102232 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.102243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.102261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.102271 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:34 crc kubenswrapper[4766]: E1213 03:45:34.123229 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:34Z is after 
2025-08-24T17:21:41Z" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.128560 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.128640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.128662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.128683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.128695 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:34 crc kubenswrapper[4766]: E1213 03:45:34.154167 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:34Z is after 
2025-08-24T17:21:41Z" Dec 13 03:45:34 crc kubenswrapper[4766]: E1213 03:45:34.154420 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.157571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.157648 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.157663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.157687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.157702 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.261164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.261239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.261251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.261282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.261296 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.261296 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.364204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.364257 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.364269 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.364289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.364299 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.467404 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.467494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.467508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.467529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.467542 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.569798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.569840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.569867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.569891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.569907 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.615421 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.615633 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 03:45:34 crc kubenswrapper[4766]: E1213 03:45:34.615734 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc"
Dec 13 03:45:34 crc kubenswrapper[4766]: E1213 03:45:34.616228 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.616839 4766 scope.go:117] "RemoveContainer" containerID="2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.672729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.673296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.673311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.673333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
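The NotReady loop above is the second, independent symptom: the container runtime reports NetworkReady=false because nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d (on an OVN-Kubernetes cluster like this one, that file is produced by ovnkube-node once its controller is running), so the kubelet cannot create new pod sandboxes and skips syncing the affected pods. What the readiness check ultimately gates on can be reproduced with a short Go sketch; the directory is the one named in the kubelet message, and the extensions are the ones commonly recognized by libcni (an assumption, not taken from this log):

// cnicheck.go - report whether the CNI configuration directory the kubelet
// complains about contains any network config.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v (NetworkReady would stay false)\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni commonly loads
			fmt.Printf("found CNI config: %s\n", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}

While the directory stays empty, only host-network pods (the ones already Running below) make progress; sandbox-network pods such as network-metrics-daemon-qvxrm stay stuck in ContainerCreating.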
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.673347 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.775449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.775491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.775507 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.775527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.775540 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.877960 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.878012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.878027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.878047 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.878059 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.961863 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/1.log"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.964981 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358"}
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.965506 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.980655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.980706 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.980715 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.980734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.980747 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:34Z","lastTransitionTime":"2025-12-13T03:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:34 crc kubenswrapper[4766]: I1213 03:45:34.984255 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:34Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.290062 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:34Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.292389 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.292455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.292471 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.292490 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:35 crc 
kubenswrapper[4766]: I1213 03:45:35.292503 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:35Z","lastTransitionTime":"2025-12-13T03:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.310058 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8
f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.331655 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.348406 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.394829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.394873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.394886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.394907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.394918 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:35Z","lastTransitionTime":"2025-12-13T03:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.429911 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.458316 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.482217 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.498257 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.498296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.498306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.498325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.498336 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:35Z","lastTransitionTime":"2025-12-13T03:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.524889 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.540455 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.555960 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.573342 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.587980 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.601133 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.601202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.601219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.601245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.601268 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:35Z","lastTransitionTime":"2025-12-13T03:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.620783 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.620848 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:35 crc kubenswrapper[4766]: E1213 03:45:35.620975 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.620880 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:18Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI1213 03:45:17.788044 6207 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:17.788725 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:17.788754 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:17.788763 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:17.788775 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1213 03:45:17.788780 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:17.788816 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:17.788856 6207 factory.go:656] Stopping watch factory\\\\nI1213 03:45:17.788874 6207 ovnkube.go:599] Stopped ovnkube\\\\nI1213 03:45:17.789027 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:17.789038 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:17.789044 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:17.789050 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:17.789056 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:17.789063 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 
03:45:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: E1213 03:45:35.621106 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.649691 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.667567 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.732962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.733019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.733032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.733056 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.733274 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:35Z","lastTransitionTime":"2025-12-13T03:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.788486 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:35Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.936916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.936977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.936994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.937019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:35 crc kubenswrapper[4766]: I1213 03:45:35.937035 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:35Z","lastTransitionTime":"2025-12-13T03:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.039126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.039193 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.039205 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.039226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.039241 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:36Z","lastTransitionTime":"2025-12-13T03:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.142111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.142556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.142734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.142818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.142931 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:36Z","lastTransitionTime":"2025-12-13T03:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.247957 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.248017 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.248035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.248057 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.248070 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:36Z","lastTransitionTime":"2025-12-13T03:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.353050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.353097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.353110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.353129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.353144 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:36Z","lastTransitionTime":"2025-12-13T03:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.456032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.456074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.456084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.456104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.456114 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:36Z","lastTransitionTime":"2025-12-13T03:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.844605 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:36 crc kubenswrapper[4766]: E1213 03:45:36.844778 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.845484 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.845648 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:36 crc kubenswrapper[4766]: E1213 03:45:36.845949 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.845987 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.846022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.846034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.846059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.846070 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:36Z","lastTransitionTime":"2025-12-13T03:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:36 crc kubenswrapper[4766]: E1213 03:45:36.845638 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.853201 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.875454 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:36Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.888936 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:36Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.903150 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:36Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.919908 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:36Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.945714 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8
c9c9dcac8891d356a3933358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:18Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI1213 03:45:17.788044 6207 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:17.788725 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:17.788754 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:17.788763 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:17.788775 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1213 03:45:17.788780 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:17.788816 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:17.788856 6207 factory.go:656] Stopping watch factory\\\\nI1213 03:45:17.788874 6207 ovnkube.go:599] Stopped ovnkube\\\\nI1213 03:45:17.789027 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:17.789038 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:17.789044 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:17.789050 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:17.789056 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:17.789063 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 
03:45:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:36Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.948533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.948582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.948596 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.948615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.948629 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:36Z","lastTransitionTime":"2025-12-13T03:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.963595 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:36Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:36 crc kubenswrapper[4766]: I1213 03:45:36.985937 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:36Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.002068 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a74
8dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:36Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.016705 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:37Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.030008 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:37Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.044679 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:37Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.051250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.051312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.051321 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.051339 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.051351 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:37Z","lastTransitionTime":"2025-12-13T03:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.060483 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:37Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.081560 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:37Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.166827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.166887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.166904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.166929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.166948 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:37Z","lastTransitionTime":"2025-12-13T03:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.184708 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:37Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.203595 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:37Z is after 
2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.224134 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:37Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.248288 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:37Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.271032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.271107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.271131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.271159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.271178 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:37Z","lastTransitionTime":"2025-12-13T03:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.373971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.374039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.374052 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.374073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.374090 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:37Z","lastTransitionTime":"2025-12-13T03:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.477524 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.477567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.477577 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.477595 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.477606 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:37Z","lastTransitionTime":"2025-12-13T03:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.580263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.580310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.580322 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.580346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.580385 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:37Z","lastTransitionTime":"2025-12-13T03:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.615638 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:37 crc kubenswrapper[4766]: E1213 03:45:37.615902 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.683704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.683765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.683778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.683801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.683820 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:37Z","lastTransitionTime":"2025-12-13T03:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.807498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.807547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.807569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.807595 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.807632 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:37Z","lastTransitionTime":"2025-12-13T03:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.911039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.911082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.911092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.911124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.911136 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:37Z","lastTransitionTime":"2025-12-13T03:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.976773 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/2.log" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.977451 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/1.log" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.980375 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358" exitCode=1 Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.980489 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358"} Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.980595 4766 scope.go:117] "RemoveContainer" containerID="2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed" Dec 13 03:45:37 crc kubenswrapper[4766]: I1213 03:45:37.981699 4766 scope.go:117] "RemoveContainer" containerID="0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358" Dec 13 03:45:37 crc kubenswrapper[4766]: E1213 03:45:37.981935 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.001211 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:37Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.013354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.013415 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.013449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.013473 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.013487 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:38Z","lastTransitionTime":"2025-12-13T03:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.013359 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.027246 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.041274 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.059200 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.076156 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.089840 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.105712 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.117342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.117395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.117409 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.117452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.117465 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:38Z","lastTransitionTime":"2025-12-13T03:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.121374 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.143812 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:18Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI1213 03:45:17.788044 6207 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:17.788725 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:17.788754 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:17.788763 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:17.788775 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1213 03:45:17.788780 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:17.788816 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:17.788856 6207 factory.go:656] Stopping watch factory\\\\nI1213 03:45:17.788874 6207 ovnkube.go:599] Stopped ovnkube\\\\nI1213 03:45:17.789027 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:17.789038 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:17.789044 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:17.789050 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:17.789056 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:17.789063 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 03:45:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:37Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:37.078702 6548 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:37.078741 6548 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078794 6548 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:37.078804 6548 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078843 6548 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:37.078856 6548 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:37.078863 6548 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:37.078870 6548 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:37.078875 6548 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:37.078990 6548 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.079038 6548 factory.go:656] Stopping watch factory\\\\nI1213 03:45:37.079051 6548 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1213 03:45:37.079091 6548 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.163096 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.178783 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.194455 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a74
8dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.204415 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.231543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.231591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.231632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.231660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.231675 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:38Z","lastTransitionTime":"2025-12-13T03:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.235049 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.249270 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.262905 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:38Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.335926 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.335975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.335987 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.336013 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.336032 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:38Z","lastTransitionTime":"2025-12-13T03:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.439498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.439543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.439566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.439583 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.439595 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:38Z","lastTransitionTime":"2025-12-13T03:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.543028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.543111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.543128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.543150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.543163 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:38Z","lastTransitionTime":"2025-12-13T03:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.616017 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.616095 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.616017 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:38 crc kubenswrapper[4766]: E1213 03:45:38.616215 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:38 crc kubenswrapper[4766]: E1213 03:45:38.616287 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:38 crc kubenswrapper[4766]: E1213 03:45:38.616608 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.646949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.647021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.647052 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.647090 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.647121 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:38Z","lastTransitionTime":"2025-12-13T03:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.749694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.749785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.749803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.749825 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.749840 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:38Z","lastTransitionTime":"2025-12-13T03:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.852636 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.852694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.852717 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.852738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.852751 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:38Z","lastTransitionTime":"2025-12-13T03:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.955580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.955635 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.955659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.955676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.955686 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:38Z","lastTransitionTime":"2025-12-13T03:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:38 crc kubenswrapper[4766]: I1213 03:45:38.988902 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/2.log" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.059555 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.059596 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.059628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.059649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.059665 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:39Z","lastTransitionTime":"2025-12-13T03:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.163063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.163144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.163164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.163186 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.163220 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:39Z","lastTransitionTime":"2025-12-13T03:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.266547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.266653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.266681 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.266731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.266748 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:39Z","lastTransitionTime":"2025-12-13T03:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.370130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.370175 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.370189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.370208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.370221 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:39Z","lastTransitionTime":"2025-12-13T03:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.472918 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.472963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.472975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.472994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.473007 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:39Z","lastTransitionTime":"2025-12-13T03:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.578295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.578344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.578355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.578375 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.578387 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:39Z","lastTransitionTime":"2025-12-13T03:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.615789 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:39 crc kubenswrapper[4766]: E1213 03:45:39.615968 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.640938 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:18Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI1213 03:45:17.788044 6207 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:17.788725 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:17.788754 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:17.788763 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:17.788775 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1213 03:45:17.788780 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:17.788816 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:17.788856 6207 factory.go:656] Stopping watch factory\\\\nI1213 03:45:17.788874 6207 ovnkube.go:599] Stopped ovnkube\\\\nI1213 03:45:17.789027 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:17.789038 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:17.789044 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:17.789050 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:17.789056 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:17.789063 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 
03:45:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:37Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:37.078702 6548 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:37.078741 6548 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078794 6548 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:37.078804 6548 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078843 6548 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:37.078856 6548 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:37.078863 6548 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:37.078870 6548 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:37.078875 6548 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:37.078990 6548 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.079038 6548 factory.go:656] Stopping watch factory\\\\nI1213 03:45:37.079051 6548 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1213 03:45:37.079091 6548 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.659066 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.673456 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.681343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.681372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.681381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.681398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.681412 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:39Z","lastTransitionTime":"2025-12-13T03:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.686653 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.700050 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.714213 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.727840 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.743526 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.1
26.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.757587 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.772921 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.784011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.784195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.784213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.784233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.784246 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:39Z","lastTransitionTime":"2025-12-13T03:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.785983 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.801571 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.816605 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.833109 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.849050 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.861384 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.877583 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:39Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.887831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.887891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.887904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.887924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.887934 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:39Z","lastTransitionTime":"2025-12-13T03:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.990971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.991397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.991409 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.991448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:39 crc kubenswrapper[4766]: I1213 03:45:39.991462 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:39Z","lastTransitionTime":"2025-12-13T03:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.094152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.094243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.094261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.094289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.094308 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:40Z","lastTransitionTime":"2025-12-13T03:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.197842 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.197896 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.197905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.197923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.198177 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:40Z","lastTransitionTime":"2025-12-13T03:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.355648 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.355708 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.355725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.355747 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.355761 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:40Z","lastTransitionTime":"2025-12-13T03:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.458217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.458271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.458284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.458304 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.458322 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:40Z","lastTransitionTime":"2025-12-13T03:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.565109 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.565197 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.565215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.565252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.565267 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:40Z","lastTransitionTime":"2025-12-13T03:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.615766 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:40 crc kubenswrapper[4766]: E1213 03:45:40.616015 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.616316 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:40 crc kubenswrapper[4766]: E1213 03:45:40.616400 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.616598 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:40 crc kubenswrapper[4766]: E1213 03:45:40.616678 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
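
Every "Failed to update status for pod" record above is the same fault: the kubelet's status patch is intercepted by the pod.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-13T03:45:39Z. The Go sketch below is not part of any tooling in this log; it merely reproduces the x509 validity-window check that fails during the TLS handshake. The /etc/webhook-cert/ mount path comes from the webhook container's volumeMounts earlier in the log, but the tls.crt filename inside it is an assumption.

    // certcheck.go - minimal sketch reproducing the validity check behind
    // "x509: certificate has expired or is not yet valid".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Hypothetical filename under the mount seen in the log (/etc/webhook-cert/).
        pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block in file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now().UTC()
        switch {
        case now.After(cert.NotAfter):
            // The log's case: current time 2025-12-13T03:45:39Z is after
            // NotAfter 2025-08-24T17:21:41Z.
            fmt.Printf("expired: now %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid before %s\n",
                cert.NotBefore.UTC().Format(time.RFC3339))
        default:
            fmt.Println("certificate is within its validity window")
        }
    }

The roughly sixteen-week gap between the two timestamps suggests a node that was suspended or restored from an old image past its certificate-rotation window, the usual way a CRC cluster ends up in this state.
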
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.667827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.667865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.667878 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.667897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.667910 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:40Z","lastTransitionTime":"2025-12-13T03:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.771652 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.771705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.771718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.771736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.771748 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:40Z","lastTransitionTime":"2025-12-13T03:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.874383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.874495 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.874517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.874544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.874558 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:40Z","lastTransitionTime":"2025-12-13T03:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.977553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.977595 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.977605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.977623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:40 crc kubenswrapper[4766]: I1213 03:45:40.977634 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:40Z","lastTransitionTime":"2025-12-13T03:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.080548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.080582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.080591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.080609 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.080621 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:41Z","lastTransitionTime":"2025-12-13T03:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.183525 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.183579 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.183590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.183608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.183618 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:41Z","lastTransitionTime":"2025-12-13T03:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.287200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.287288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.287299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.287339 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.287358 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:41Z","lastTransitionTime":"2025-12-13T03:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.390085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.390123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.390134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.390152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.390163 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:41Z","lastTransitionTime":"2025-12-13T03:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.493055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.493098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.493108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.493126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.493137 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:41Z","lastTransitionTime":"2025-12-13T03:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.596685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.596742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.596755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.596776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.596792 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:41Z","lastTransitionTime":"2025-12-13T03:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.615501 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:41 crc kubenswrapper[4766]: E1213 03:45:41.615760 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.699512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.699562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.699578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.699600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.699615 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:41Z","lastTransitionTime":"2025-12-13T03:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.802634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.802683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.802696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.802716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.802731 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:41Z","lastTransitionTime":"2025-12-13T03:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.906686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.906751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.906766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.906787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:41 crc kubenswrapper[4766]: I1213 03:45:41.906801 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:41Z","lastTransitionTime":"2025-12-13T03:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.009634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.009690 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.009704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.009724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.009785 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:42Z","lastTransitionTime":"2025-12-13T03:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.112156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.112190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.112199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.112214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.112224 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:42Z","lastTransitionTime":"2025-12-13T03:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.215500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.215600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.215614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.215635 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.215839 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:42Z","lastTransitionTime":"2025-12-13T03:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.319764 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.319824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.319838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.319860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.319875 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:42Z","lastTransitionTime":"2025-12-13T03:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.423343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.423405 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.423418 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.423459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.423474 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:42Z","lastTransitionTime":"2025-12-13T03:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.526988 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.527053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.527069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.527089 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.527100 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:42Z","lastTransitionTime":"2025-12-13T03:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.580477 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:42 crc kubenswrapper[4766]: E1213 03:45:42.580775 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:42 crc kubenswrapper[4766]: E1213 03:45:42.580889 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs podName:84c9636d-a525-40e8-bc35-af07ecbdeafc nodeName:}" failed. No retries permitted until 2025-12-13 03:46:14.580842458 +0000 UTC m=+106.090775412 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs") pod "network-metrics-daemon-qvxrm" (UID: "84c9636d-a525-40e8-bc35-af07ecbdeafc") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.615187 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:42 crc kubenswrapper[4766]: E1213 03:45:42.615347 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.615407 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.615406 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:42 crc kubenswrapper[4766]: E1213 03:45:42.615539 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:42 crc kubenswrapper[4766]: E1213 03:45:42.615725 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.629825 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.629880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.629893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.629917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.629930 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:42Z","lastTransitionTime":"2025-12-13T03:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.733572 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.733620 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.733635 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.733658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.733674 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:42Z","lastTransitionTime":"2025-12-13T03:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.836715 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.836751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.836760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.836776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.836787 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:42Z","lastTransitionTime":"2025-12-13T03:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.939976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.940029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.940041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.940061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:42 crc kubenswrapper[4766]: I1213 03:45:42.940077 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:42Z","lastTransitionTime":"2025-12-13T03:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.043825 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.043892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.043910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.043950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.043965 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:43Z","lastTransitionTime":"2025-12-13T03:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.146700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.146755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.146765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.146786 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.146799 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:43Z","lastTransitionTime":"2025-12-13T03:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.250109 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.250202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.250215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.250237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.250252 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:43Z","lastTransitionTime":"2025-12-13T03:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.352666 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.352711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.352723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.352743 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.352754 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:43Z","lastTransitionTime":"2025-12-13T03:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.456255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.456346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.456393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.456421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.456490 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:43Z","lastTransitionTime":"2025-12-13T03:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.559753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.559841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.559856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.559876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.559911 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:43Z","lastTransitionTime":"2025-12-13T03:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.615764 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:43 crc kubenswrapper[4766]: E1213 03:45:43.616034 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.663158 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.663295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.663380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.663414 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.663498 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:43Z","lastTransitionTime":"2025-12-13T03:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.767286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.767345 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.767358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.767381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.767397 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:43Z","lastTransitionTime":"2025-12-13T03:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.869730 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.869793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.869804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.869822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.869834 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:43Z","lastTransitionTime":"2025-12-13T03:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.972255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.972319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.972334 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.972356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:43 crc kubenswrapper[4766]: I1213 03:45:43.972370 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:43Z","lastTransitionTime":"2025-12-13T03:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.075774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.075840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.075851 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.075867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.075882 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.179289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.179354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.179371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.179392 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.179405 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.276204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.276278 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.276292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.276316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.276332 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: E1213 03:45:44.295361 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:44Z is after 
2025-08-24T17:21:41Z" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.301556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.301632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.301649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.301695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.301711 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: E1213 03:45:44.317810 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:44Z is after 
2025-08-24T17:21:41Z" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.322696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.322725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.322735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.322768 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.322780 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: E1213 03:45:44.340995 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
2025-08-24T17:21:41Z" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.346305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.346379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.346393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.346413 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.346455 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: E1213 03:45:44.363604 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
2025-08-24T17:21:41Z" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.368663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.368704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.368715 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.368732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.368743 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: E1213 03:45:44.382971 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
2025-08-24T17:21:41Z" Dec 13 03:45:44 crc kubenswrapper[4766]: E1213 03:45:44.383100 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.385172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.385239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.385252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.385273 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.385287 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.488004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.488068 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.488083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.488106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.488131 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.591694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.591742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.591753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.591772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.591785 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.615673 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.615764 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.615851 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:44 crc kubenswrapper[4766]: E1213 03:45:44.615968 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:44 crc kubenswrapper[4766]: E1213 03:45:44.616120 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:44 crc kubenswrapper[4766]: E1213 03:45:44.616256 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.695218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.695277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.695291 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.695312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.695330 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.797992 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.798050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.798066 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.798086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.798102 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.901849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.901937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.901956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.901979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:44 crc kubenswrapper[4766]: I1213 03:45:44.901992 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:44Z","lastTransitionTime":"2025-12-13T03:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.005686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.005745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.005755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.005774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.005794 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:45Z","lastTransitionTime":"2025-12-13T03:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.108715 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.108748 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.108759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.108775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.108786 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:45Z","lastTransitionTime":"2025-12-13T03:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.214153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.214198 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.214211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.214231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.214244 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:45Z","lastTransitionTime":"2025-12-13T03:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.317422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.317558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.317568 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.317586 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.317596 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:45Z","lastTransitionTime":"2025-12-13T03:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.420393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.420515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.420534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.420557 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.420572 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:45Z","lastTransitionTime":"2025-12-13T03:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.523743 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.523809 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.523821 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.523840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.523851 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:45Z","lastTransitionTime":"2025-12-13T03:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.615628 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:45 crc kubenswrapper[4766]: E1213 03:45:45.615821 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.626624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.626670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.626684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.626701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.626711 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:45Z","lastTransitionTime":"2025-12-13T03:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.729016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.729062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.729075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.729094 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.729108 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:45Z","lastTransitionTime":"2025-12-13T03:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.831834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.831874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.831885 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.831902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.831914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:45Z","lastTransitionTime":"2025-12-13T03:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.935152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.935192 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.935201 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.935217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:45 crc kubenswrapper[4766]: I1213 03:45:45.935228 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:45Z","lastTransitionTime":"2025-12-13T03:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.037344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.037393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.037406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.037452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.037466 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:46Z","lastTransitionTime":"2025-12-13T03:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.144734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.144785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.144798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.144816 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.144828 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:46Z","lastTransitionTime":"2025-12-13T03:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.248561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.248627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.248643 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.248668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.248681 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:46Z","lastTransitionTime":"2025-12-13T03:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.351887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.351930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.351942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.351962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.351974 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:46Z","lastTransitionTime":"2025-12-13T03:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.455984 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.456054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.456073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.456100 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.456116 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:46Z","lastTransitionTime":"2025-12-13T03:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.558848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.558953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.558966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.558986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.559006 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:46Z","lastTransitionTime":"2025-12-13T03:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.616194 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.616285 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.616338 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:46 crc kubenswrapper[4766]: E1213 03:45:46.616416 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:46 crc kubenswrapper[4766]: E1213 03:45:46.616523 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:46 crc kubenswrapper[4766]: E1213 03:45:46.616599 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.661887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.661961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.661976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.661998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.662015 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:46Z","lastTransitionTime":"2025-12-13T03:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.765349 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.765416 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.765464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.765489 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.765506 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:46Z","lastTransitionTime":"2025-12-13T03:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.868949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.869012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.869036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.869069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.869092 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:46Z","lastTransitionTime":"2025-12-13T03:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.973793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.973849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.973865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.973888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:46 crc kubenswrapper[4766]: I1213 03:45:46.973901 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:46Z","lastTransitionTime":"2025-12-13T03:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.076394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.076477 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.076533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.076568 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.076584 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:47Z","lastTransitionTime":"2025-12-13T03:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.178966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.179012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.179023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.179044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.179056 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:47Z","lastTransitionTime":"2025-12-13T03:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.282215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.282255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.282265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.282282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.282293 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:47Z","lastTransitionTime":"2025-12-13T03:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.385330 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.385385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.385401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.385422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.385467 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:47Z","lastTransitionTime":"2025-12-13T03:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.488340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.488407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.488443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.488471 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.488486 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:47Z","lastTransitionTime":"2025-12-13T03:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.591570 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.591637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.591654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.591699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.591715 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:47Z","lastTransitionTime":"2025-12-13T03:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.615380 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:47 crc kubenswrapper[4766]: E1213 03:45:47.615597 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.694401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.694481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.694495 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.694515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.694529 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:47Z","lastTransitionTime":"2025-12-13T03:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.797408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.797493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.797508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.797532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.797548 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:47Z","lastTransitionTime":"2025-12-13T03:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.901339 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.901388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.901399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.901470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:47 crc kubenswrapper[4766]: I1213 03:45:47.901483 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:47Z","lastTransitionTime":"2025-12-13T03:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.004323 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.004363 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.004382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.004401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.004416 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:48Z","lastTransitionTime":"2025-12-13T03:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.107864 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.107908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.107918 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.107936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.107947 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:48Z","lastTransitionTime":"2025-12-13T03:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.211186 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.211240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.211253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.211278 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.211292 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:48Z","lastTransitionTime":"2025-12-13T03:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.314305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.314358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.314372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.314392 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.314407 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:48Z","lastTransitionTime":"2025-12-13T03:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.417488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.417533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.417545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.417565 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.417577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:48Z","lastTransitionTime":"2025-12-13T03:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.520497 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.520557 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.520569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.520589 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.520603 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:48Z","lastTransitionTime":"2025-12-13T03:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.615825 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.615935 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:48 crc kubenswrapper[4766]: E1213 03:45:48.616031 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:48 crc kubenswrapper[4766]: E1213 03:45:48.616113 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.616342 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:48 crc kubenswrapper[4766]: E1213 03:45:48.616638 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.623906 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.623937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.623947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.623961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.623970 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:48Z","lastTransitionTime":"2025-12-13T03:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.727077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.727149 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.727161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.727179 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.727190 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:48Z","lastTransitionTime":"2025-12-13T03:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.829773 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.829809 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.829818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.829832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.829844 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:48Z","lastTransitionTime":"2025-12-13T03:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.932607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.932919 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.933027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.933121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:48 crc kubenswrapper[4766]: I1213 03:45:48.933198 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:48Z","lastTransitionTime":"2025-12-13T03:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.035452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.035517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.035532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.035554 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.035568 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:49Z","lastTransitionTime":"2025-12-13T03:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.139672 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.140075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.140298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.140445 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.140560 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:49Z","lastTransitionTime":"2025-12-13T03:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.243030 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.243452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.243571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.243722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.243827 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:49Z","lastTransitionTime":"2025-12-13T03:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.347308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.347991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.348086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.348175 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.348259 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:49Z","lastTransitionTime":"2025-12-13T03:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.450848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.450896 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.450908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.450927 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.450940 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:49Z","lastTransitionTime":"2025-12-13T03:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.553203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.553249 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.553260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.553278 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.553291 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:49Z","lastTransitionTime":"2025-12-13T03:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.615581 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:49 crc kubenswrapper[4766]: E1213 03:45:49.615754 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.634776 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.649821 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
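The status patch above, and every one that follows, is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-13. A Python 3 sketch to pull the certificate the webhook actually presents so its validity window can be inspected out-of-band; verification is deliberately disabled (failed verification is the symptom under investigation), and only the host and port are taken from the log:

    #!/usr/bin/env python3
    # Fetch the webhook's serving certificate without verifying it, then print
    # it as PEM for offline decoding (any X.509 viewer can show notBefore and
    # notAfter). A sketch under the assumption the webhook is still listening.
    import socket
    import ssl

    HOST, PORT = "127.0.0.1", 9743  # from "Post https://127.0.0.1:9743/pod..." in the log

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False        # the cert is expected to be invalid,
    ctx.verify_mode = ssl.CERT_NONE   # so do not let the handshake fail on it

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER certificate

    print(ssl.DER_cert_to_PEM_cert(der))
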
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.655493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.655530 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.655546 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.655567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.655580 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:49Z","lastTransitionTime":"2025-12-13T03:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.664412 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z"
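The same webhook rejection now repeats for pod after pod. A quick way to gauge the blast radius from a saved excerpt of this journal is to tally the rejected status updates per pod; a Python 3 sketch that reads the excerpt from stdin and simply matches the "Failed to update status for pod" fields shown in these entries:

    #!/usr/bin/env python3
    # Count webhook-rejected status patches per pod in a saved journal excerpt.
    # A sketch; the regex targets the exact field layout seen in the log above.
    import re
    import sys
    from collections import Counter

    pat = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')
    counts = Counter(pat.findall(sys.stdin.read()))
    for pod, n in counts.most_common():
        print(f"{n:3d}  {pod}")
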
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.692922 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.738642 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.758785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.759099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.759190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.759271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.759401 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:49Z","lastTransitionTime":"2025-12-13T03:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.762129 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb0d2712c18bfa5fe0348e7f0506275e7a452b99e3efe9fd16632831452ceed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:18Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI1213 03:45:17.788044 6207 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:17.788725 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1213 03:45:17.788754 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1213 03:45:17.788763 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1213 03:45:17.788775 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1213 03:45:17.788780 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1213 03:45:17.788816 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:17.788856 6207 factory.go:656] Stopping watch factory\\\\nI1213 03:45:17.788874 6207 ovnkube.go:599] Stopped ovnkube\\\\nI1213 03:45:17.789027 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:17.789038 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:17.789044 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:17.789050 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:17.789056 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1213 03:45:17.789063 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1213 
03:45:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:37Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:37.078702 6548 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:37.078741 6548 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078794 6548 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:37.078804 6548 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078843 6548 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:37.078856 6548 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:37.078863 6548 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:37.078870 6548 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:37.078875 6548 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:37.078990 6548 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.079038 6548 factory.go:656] Stopping watch factory\\\\nI1213 03:45:37.079051 6548 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1213 03:45:37.079091 6548 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z"
Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.781089 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.793817 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.808187 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.822898 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.835686 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.850153 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.862308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.862345 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.862357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.862373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.862384 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:49Z","lastTransitionTime":"2025-12-13T03:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.863214 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.878971 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.891523 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.904132 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:49Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.965098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.965416 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.965534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.965614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:49 crc kubenswrapper[4766]: I1213 03:45:49.965692 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:49Z","lastTransitionTime":"2025-12-13T03:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.069043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.069115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.069129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.069170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.069182 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:50Z","lastTransitionTime":"2025-12-13T03:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.172091 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.172147 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.172160 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.172183 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.172196 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:50Z","lastTransitionTime":"2025-12-13T03:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.274804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.274848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.274857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.274874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.274885 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:50Z","lastTransitionTime":"2025-12-13T03:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.377193 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.377289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.377405 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.377606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.377662 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:50Z","lastTransitionTime":"2025-12-13T03:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.481228 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.481291 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.481309 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.481333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.481350 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:50Z","lastTransitionTime":"2025-12-13T03:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.584841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.584882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.584894 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.584911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.584922 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:50Z","lastTransitionTime":"2025-12-13T03:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.615879 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.615966 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.615946 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:50 crc kubenswrapper[4766]: E1213 03:45:50.616152 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:50 crc kubenswrapper[4766]: E1213 03:45:50.616971 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:50 crc kubenswrapper[4766]: E1213 03:45:50.617075 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.617834 4766 scope.go:117] "RemoveContainer" containerID="0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358" Dec 13 03:45:50 crc kubenswrapper[4766]: E1213 03:45:50.618068 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.643675 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.660006 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 
03:45:50.680040 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.687563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.687614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.687624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.687640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.687651 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:50Z","lastTransitionTime":"2025-12-13T03:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.702904 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.719192 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.739412 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.751409 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.773779 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.790728 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.790770 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.790780 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.790797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.790808 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:50Z","lastTransitionTime":"2025-12-13T03:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.791031 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.808365 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.825257 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.839958 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.854811 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.877441 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8
c9c9dcac8891d356a3933358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:37Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:37.078702 6548 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:37.078741 6548 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078794 6548 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:37.078804 6548 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078843 6548 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:37.078856 6548 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:37.078863 6548 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:37.078870 6548 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:37.078875 6548 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:37.078990 6548 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.079038 6548 factory.go:656] Stopping watch factory\\\\nI1213 03:45:37.079051 6548 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1213 03:45:37.079091 6548 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.891928 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.893355 4766 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.893395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.893464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.893488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.893503 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:50Z","lastTransitionTime":"2025-12-13T03:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.905682 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.942313 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:50Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.996608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.996654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:50 crc 
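
The "Failed to update status for pod" records in this stretch all carry the same payload shape: a strategic-merge patch whose JSON is embedded as a quoted string inside the kubelet's err="..." field, which is why the inner quotes surface in the journal as \\\". A minimal sketch of unwrapping one of these payloads in Python, assuming the patch string has already been cut out of the err field; the uid is taken from the multus-additional-cni-plugins record above, and the rest of the patch is shortened for illustration:

import json

# One json.loads per quoting layer: the patch is a JSON document stored
# as a JSON string, so decoding twice yields a dict. Payload shortened;
# a real record also carries conditions, containerStatuses, podIPs, etc.
raw = '"{\\"metadata\\":{\\"uid\\":\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\"},\\"status\\":{\\"phase\\":\\"Running\\"}}"'
patch = json.loads(json.loads(raw))  # str -> str -> dict
print(patch["status"]["phase"])      # -> Running

The $setElementOrder/conditions directive visible in the full payloads is strategic-merge-patch metadata telling the API server how to order the conditions list after the merge; it is part of the patch, not of the pod's stored status.
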
kubenswrapper[4766]: I1213 03:45:50.996668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.996686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:50 crc kubenswrapper[4766]: I1213 03:45:50.996700 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:50Z","lastTransitionTime":"2025-12-13T03:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.099603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.099665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.099681 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.099702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.099716 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:51Z","lastTransitionTime":"2025-12-13T03:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.201697 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.201899 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.202072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.202271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.202487 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:51Z","lastTransitionTime":"2025-12-13T03:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.305997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.306378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.306527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.306629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.306738 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:51Z","lastTransitionTime":"2025-12-13T03:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.409875 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.410554 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.410671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.410798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.410883 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:51Z","lastTransitionTime":"2025-12-13T03:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.513862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.514316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.514481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.514578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.514663 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:51Z","lastTransitionTime":"2025-12-13T03:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.616101 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:51 crc kubenswrapper[4766]: E1213 03:45:51.616307 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.617632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.617658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.617667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.617685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.617695 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:51Z","lastTransitionTime":"2025-12-13T03:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.727847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.727901 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.727911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.727929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.727939 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:51Z","lastTransitionTime":"2025-12-13T03:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.831443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.831488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.831500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.831521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.831535 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:51Z","lastTransitionTime":"2025-12-13T03:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.934247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.934305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.934357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.934377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:51 crc kubenswrapper[4766]: I1213 03:45:51.934389 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:51Z","lastTransitionTime":"2025-12-13T03:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.037559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.037601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.037611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.037627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.037638 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:52Z","lastTransitionTime":"2025-12-13T03:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
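
The five-record block repeating here (NodeHasSufficientMemory through NodeNotReady, then "Node became not ready") is the kubelet re-evaluating node conditions on every sync while the network plugin is down: the memory, disk, and PID conditions stay healthy while Ready has flipped to False. A quick way to inspect the same conditions from outside the node, assuming the third-party kubernetes Python client and a kubeconfig that can reach this API server ("crc" is the node name from the log):

from kubernetes import client, config

config.load_kube_config()                  # or config.load_incluster_config()
node = client.CoreV1Api().read_node("crc")
for cond in node.status.conditions:
    # While CNI is down, expect Ready=False with reason KubeletNotReady.
    print(cond.type, cond.status, cond.reason)
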
Has your network provider started?"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.048738 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/0.log" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.048816 4766 generic.go:334] "Generic (PLEG): container finished" podID="b724d1e1-9ded-434e-b852-f5233f27ef32" containerID="ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0" exitCode=1 Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.048899 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6n4vc" event={"ID":"b724d1e1-9ded-434e-b852-f5233f27ef32","Type":"ContainerDied","Data":"ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.050015 4766 scope.go:117] "RemoveContainer" containerID="ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.068899 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.086280 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.102358 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.116464 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.130670 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.140536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.140575 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.140587 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.140607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.140619 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:52Z","lastTransitionTime":"2025-12-13T03:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
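
Every status patch in this stretch is rejected by the same admission webhook: the kubelet's POST to https://127.0.0.1:9743/pod fails TLS verification because the serving certificate's notAfter (2025-08-24T17:21:41Z) is far behind the node clock (2025-12-13T03:45:52Z). The verifier's expiry test reduces to a timestamp comparison; a sketch using only the two times printed in the error:

from datetime import datetime, timezone

FMT = "%Y-%m-%dT%H:%M:%SZ"
not_after = datetime.strptime("2025-08-24T17:21:41Z", FMT).replace(tzinfo=timezone.utc)
now = datetime.strptime("2025-12-13T03:45:52Z", FMT).replace(tzinfo=timezone.utc)

# x509 validity is the closed interval [notBefore, notAfter]; the node
# clock is past notAfter, so every handshake with this cert must fail.
assert now > not_after
print(f"expired {(now - not_after).days} days ago")  # -> expired 110 days ago

The same comparison could be run against the live endpoint by fetching the PEM with ssl.get_server_certificate(("127.0.0.1", 9743)) and parsing notAfter out of it, assuming the webhook is reachable from wherever the check runs.
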
Has your network provider started?"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.148033 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.167579 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:51Z\\\",\\\"message\\\":\\\"2025-12-13T03:45:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a02f7ef-fa69-4aca-9147-cbc0b38dbb8a\\\\n2025-12-13T03:45:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a02f7ef-fa69-4aca-9147-cbc0b38dbb8a to /host/opt/cni/bin/\\\\n2025-12-13T03:45:05Z [verbose] multus-daemon started\\\\n2025-12-13T03:45:05Z [verbose] Readiness Indicator file check\\\\n2025-12-13T03:45:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.180564 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.195690 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.210580 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.238714 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8
c9c9dcac8891d356a3933358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:37Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:37.078702 6548 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:37.078741 6548 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078794 6548 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:37.078804 6548 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078843 6548 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:37.078856 6548 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:37.078863 6548 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:37.078870 6548 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:37.078875 6548 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:37.078990 6548 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.079038 6548 factory.go:656] Stopping watch factory\\\\nI1213 03:45:37.079051 6548 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1213 03:45:37.079091 6548 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.245557 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.245597 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.245606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.245624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.245635 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:52Z","lastTransitionTime":"2025-12-13T03:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.256689 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.280985 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.299256 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.314878 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.332041 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.348324 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:52Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.350395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.350475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.350500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.350524 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.350536 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:52Z","lastTransitionTime":"2025-12-13T03:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.454879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.454941 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.454955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.454975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.454986 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:52Z","lastTransitionTime":"2025-12-13T03:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.558181 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.558231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.558243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.558262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.558273 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:52Z","lastTransitionTime":"2025-12-13T03:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.615184 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.615238 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.615330 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:52 crc kubenswrapper[4766]: E1213 03:45:52.615364 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:52 crc kubenswrapper[4766]: E1213 03:45:52.615484 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:52 crc kubenswrapper[4766]: E1213 03:45:52.615660 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.662070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.662120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.662129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.662149 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.662162 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:52Z","lastTransitionTime":"2025-12-13T03:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.766348 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.766419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.766458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.766490 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.766504 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:52Z","lastTransitionTime":"2025-12-13T03:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.870339 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.870396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.870410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.870441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.870452 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:52Z","lastTransitionTime":"2025-12-13T03:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.973898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.973997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.974018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.974050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:52 crc kubenswrapper[4766]: I1213 03:45:52.974070 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:52Z","lastTransitionTime":"2025-12-13T03:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.055319 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/0.log" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.055377 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6n4vc" event={"ID":"b724d1e1-9ded-434e-b852-f5233f27ef32","Type":"ContainerStarted","Data":"40ada8692e77f3fdcfb7af0831f398beba03791a2e7286360296b007c809a329"} Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.072591 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.078641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.078763 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.078798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.078836 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.078871 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:53Z","lastTransitionTime":"2025-12-13T03:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.098213 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.115046 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.131652 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.145990 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.160291 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ada8692e77f3fdcfb7af0831f398beba03791a2e7286360296b007c809a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:51Z\\\",\\\"message\\\":\\\"2025-12-13T03:45:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a02f7ef-fa69-4aca-9147-cbc0b38dbb8a\\\\n2025-12-13T03:45:05+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_5a02f7ef-fa69-4aca-9147-cbc0b38dbb8a to /host/opt/cni/bin/\\\\n2025-12-13T03:45:05Z [verbose] multus-daemon started\\\\n2025-12-13T03:45:05Z [verbose] Readiness Indicator file check\\\\n2025-12-13T03:45:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.176588 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.181787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.181824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.181837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.181857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.181873 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:53Z","lastTransitionTime":"2025-12-13T03:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.191445 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.212402 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.230648 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.248105 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.261955 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.281475 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.285356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.285393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.285408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.285456 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.285479 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:53Z","lastTransitionTime":"2025-12-13T03:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.295258 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.322238 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:37Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:37.078702 6548 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:37.078741 6548 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078794 6548 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:37.078804 6548 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078843 6548 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:37.078856 6548 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:37.078863 6548 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:37.078870 6548 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:37.078875 6548 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:37.078990 6548 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.079038 6548 factory.go:656] Stopping watch factory\\\\nI1213 03:45:37.079051 6548 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1213 03:45:37.079091 6548 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.343846 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.358634 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:53Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.389205 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.389293 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.389320 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.389354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.389379 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:53Z","lastTransitionTime":"2025-12-13T03:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.492908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.492979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.492998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.493025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.493043 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:53Z","lastTransitionTime":"2025-12-13T03:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.596727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.596799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.596825 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.596849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.596861 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:53Z","lastTransitionTime":"2025-12-13T03:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.615472 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:53 crc kubenswrapper[4766]: E1213 03:45:53.615667 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.701197 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.701249 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.701263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.701286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.701328 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:53Z","lastTransitionTime":"2025-12-13T03:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.804832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.804900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.804914 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.804938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.804950 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:53Z","lastTransitionTime":"2025-12-13T03:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.908176 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.908245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.908266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.908295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:53 crc kubenswrapper[4766]: I1213 03:45:53.908313 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:53Z","lastTransitionTime":"2025-12-13T03:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.013365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.014126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.014165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.014241 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.014261 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.117187 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.117263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.117311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.117332 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.117344 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.221012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.221084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.221097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.221123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.221137 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.324123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.324174 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.324196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.324214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.324226 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.428354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.428492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.428521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.428550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.428574 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.532354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.532467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.532496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.532534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.532562 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.562046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.562131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.562156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.562189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.562210 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: E1213 03:45:54.587546 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:54Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.593049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.593184 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.593205 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.593233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.593252 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: E1213 03:45:54.612786 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:54Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.615718 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.615739 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.615784 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:54 crc kubenswrapper[4766]: E1213 03:45:54.615911 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:54 crc kubenswrapper[4766]: E1213 03:45:54.616195 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:54 crc kubenswrapper[4766]: E1213 03:45:54.616262 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.618985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.619116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.619198 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.619317 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.619397 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: E1213 03:45:54.635934 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the attempt above; elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:54Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.642317 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.642359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
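
Every failed status patch in this stretch has the same root cause: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-12-13. Below is a minimal Go sketch of how that certificate window could be confirmed from the node; it is a hypothetical diagnostic built only from the endpoint address in the log, not kubelet or OpenShift code.

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Endpoint taken from the webhook error above. Verification is skipped
        // on purpose so the expired certificate can still be inspected; this
        // is a read-only check, not a fix.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial webhook endpoint: %v", err)
        }
        defer conn.Close()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%q notBefore=%s notAfter=%s\n",
                cert.Subject.CommonName, cert.NotBefore.UTC(), cert.NotAfter.UTC())
        }
    }

Against this node it should report a notAfter matching 2025-08-24T17:21:41Z; until that certificate is rotated (or the clock corrected, if the clock is at fault), every patch attempt will keep failing identically.
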
event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.642375 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.642395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.642408 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: E1213 03:45:54.659768 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:54Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.664520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.664576 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
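
The kubelet does not retry these patches indefinitely: each status-update round makes a small fixed number of consecutive attempts and then gives up, producing the "update node status exceeds retry count" error recorded just below. A schematic sketch of that loop follows; the budget of 5 is an assumption (it is the kubelet's long-standing default, but the value does not appear in this log), and the sketch mirrors the behavior, not the real implementation.

    package main

    import (
        "errors"
        "fmt"
    )

    // nodeStatusUpdateRetry is assumed to be 5, the kubelet's historical
    // default; only the "N failures, then give up" shape is taken from the log.
    const nodeStatusUpdateRetry = 5

    func updateNodeStatus(try func() error) error {
        var lastErr error
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if lastErr = try(); lastErr == nil {
                return nil
            }
        }
        return fmt.Errorf("update node status exceeds retry count: %w", lastErr)
    }

    func main() {
        // Every attempt fails the same way while the webhook certificate is expired.
        webhookDown := errors.New("x509: certificate has expired or is not yet valid")
        fmt.Println(updateNodeStatus(func() error { return webhookDown }))
    }

After a round is exhausted the kubelet waits for its next heartbeat interval and starts over, which is consistent with the bursts of identical failures repeating through this log.
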
event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.664594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.664623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.664641 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: E1213 03:45:54.686311 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:54Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:54 crc kubenswrapper[4766]: E1213 03:45:54.686702 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.689321 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
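
Independently of the webhook failure, the Ready condition and every pod sync in this window are blocked on CNI: the runtime reports NetworkReady=false because no network configuration exists in /etc/kubernetes/cni/net.d/ yet. Below is a small Go sketch of the check that message implies; the path comes from the log, while the extension list and filesystem access are assumptions.

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named by the NetworkPluginNotReady errors above.
        const dir = "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatalf("read %s: %v", dir, err)
        }
        found := false
        for _, e := range entries {
            // Typical CNI config extensions; illustrative, not exhaustive.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration found; the node stays NotReady until the network provider writes one")
        }
    }

The two failure families are plausibly linked here: the network provider's control plane cannot settle while the node-identity webhook certificate is expired, so the directory stays empty and the NotReady events below keep repeating.
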
event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.689388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.689415 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.689487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.689521 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.792681 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.792772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.792796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.792827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.792855 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.895714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.895786 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.895808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.895837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.895852 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.999253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.999754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.999857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:54 crc kubenswrapper[4766]: I1213 03:45:54.999932 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.000000 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:54Z","lastTransitionTime":"2025-12-13T03:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.104370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.104467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.104482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.104506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.104520 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:55Z","lastTransitionTime":"2025-12-13T03:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.210830 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.211499 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.211672 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.211862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.212055 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:55Z","lastTransitionTime":"2025-12-13T03:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.316109 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.316187 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.316207 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.316234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.316252 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:55Z","lastTransitionTime":"2025-12-13T03:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.420520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.420593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.420636 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.420673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.420697 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:55Z","lastTransitionTime":"2025-12-13T03:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.525213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.525286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.525309 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.525341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.525360 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:55Z","lastTransitionTime":"2025-12-13T03:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.615566 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:55 crc kubenswrapper[4766]: E1213 03:45:55.615821 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.629492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.629547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.629558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.629576 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.629589 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:55Z","lastTransitionTime":"2025-12-13T03:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.732796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.732856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.732876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.732898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.732911 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:55Z","lastTransitionTime":"2025-12-13T03:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.837122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.837215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.837240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.837268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.837287 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:55Z","lastTransitionTime":"2025-12-13T03:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.941384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.941473 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.941487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.941512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:55 crc kubenswrapper[4766]: I1213 03:45:55.941528 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:55Z","lastTransitionTime":"2025-12-13T03:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.045247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.045335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.045381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.045472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.045496 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:56Z","lastTransitionTime":"2025-12-13T03:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.149310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.149361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.149371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.149390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.149400 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:56Z","lastTransitionTime":"2025-12-13T03:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.253221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.253294 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.253321 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.253357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.253382 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:56Z","lastTransitionTime":"2025-12-13T03:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.394690 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.394868 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.394884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.394905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.394919 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:56Z","lastTransitionTime":"2025-12-13T03:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.498680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.498728 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.498740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.498760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.498773 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:56Z","lastTransitionTime":"2025-12-13T03:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.602189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.602248 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.602265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.602293 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.602314 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:56Z","lastTransitionTime":"2025-12-13T03:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.615621 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.615724 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.615807 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:56 crc kubenswrapper[4766]: E1213 03:45:56.615965 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:56 crc kubenswrapper[4766]: E1213 03:45:56.616123 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:56 crc kubenswrapper[4766]: E1213 03:45:56.616249 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.706192 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.706243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.706257 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.706278 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.706291 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:56Z","lastTransitionTime":"2025-12-13T03:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.809994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.810081 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.810103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.810132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.810168 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:56Z","lastTransitionTime":"2025-12-13T03:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.914381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.914488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.914512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.914545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:56 crc kubenswrapper[4766]: I1213 03:45:56.914566 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:56Z","lastTransitionTime":"2025-12-13T03:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.019124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.019220 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.019246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.019280 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.019301 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:57Z","lastTransitionTime":"2025-12-13T03:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.123401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.123491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.123503 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.123521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.123538 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:57Z","lastTransitionTime":"2025-12-13T03:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.227128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.227220 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.227238 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.227259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.227270 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:57Z","lastTransitionTime":"2025-12-13T03:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.330373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.330467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.330479 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.330499 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.330543 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:57Z","lastTransitionTime":"2025-12-13T03:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.434247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.434292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.434303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.434322 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.434332 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:57Z","lastTransitionTime":"2025-12-13T03:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.538006 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.538051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.538065 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.538092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.538107 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:57Z","lastTransitionTime":"2025-12-13T03:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.616356 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:57 crc kubenswrapper[4766]: E1213 03:45:57.616578 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.641500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.641542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.641553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.641573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.641586 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:57Z","lastTransitionTime":"2025-12-13T03:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.744395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.744446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.744456 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.744473 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.744483 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:57Z","lastTransitionTime":"2025-12-13T03:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.849595 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.850122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.850135 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.850162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.850176 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:57Z","lastTransitionTime":"2025-12-13T03:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.953249 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.953331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.953342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.953365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:57 crc kubenswrapper[4766]: I1213 03:45:57.953381 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:57Z","lastTransitionTime":"2025-12-13T03:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.057119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.057182 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.057193 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.057212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.057223 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:58Z","lastTransitionTime":"2025-12-13T03:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.160574 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.160829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.160857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.160912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.160927 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:58Z","lastTransitionTime":"2025-12-13T03:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.264643 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.264696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.264707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.264725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.264736 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:58Z","lastTransitionTime":"2025-12-13T03:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.367371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.367444 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.367454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.367478 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.367494 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:58Z","lastTransitionTime":"2025-12-13T03:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.471484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.471571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.471590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.471621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.471642 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:58Z","lastTransitionTime":"2025-12-13T03:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.574023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.574092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.574105 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.574125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.574137 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:58Z","lastTransitionTime":"2025-12-13T03:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.615779 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.615835 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.615779 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:45:58 crc kubenswrapper[4766]: E1213 03:45:58.615980 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:45:58 crc kubenswrapper[4766]: E1213 03:45:58.616139 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:45:58 crc kubenswrapper[4766]: E1213 03:45:58.616396 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.676823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.676909 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.676925 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.676973 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.676988 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:58Z","lastTransitionTime":"2025-12-13T03:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.780615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.780697 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.780717 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.780748 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.780769 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:58Z","lastTransitionTime":"2025-12-13T03:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.884710 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.884783 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.884802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.884831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.884853 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:58Z","lastTransitionTime":"2025-12-13T03:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.988231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.988311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.988335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.988369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:58 crc kubenswrapper[4766]: I1213 03:45:58.988398 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:58Z","lastTransitionTime":"2025-12-13T03:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.090824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.090884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.090898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.090918 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.090932 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:59Z","lastTransitionTime":"2025-12-13T03:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.193870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.193923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.193934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.193952 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.193966 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:59Z","lastTransitionTime":"2025-12-13T03:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.297279 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.297356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.297371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.297393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.297646 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:59Z","lastTransitionTime":"2025-12-13T03:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.400541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.400595 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.400611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.400640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.400659 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:59Z","lastTransitionTime":"2025-12-13T03:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.504410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.504520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.504539 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.504571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.504590 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:59Z","lastTransitionTime":"2025-12-13T03:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.607831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.607903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.607922 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.607955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.607976 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:59Z","lastTransitionTime":"2025-12-13T03:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.615581 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:45:59 crc kubenswrapper[4766]: E1213 03:45:59.615782 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.630735 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.633807 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.647804 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.661215 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.676483 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 
03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.691101 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.706868 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.711319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.711364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.711376 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.711404 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.711418 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:59Z","lastTransitionTime":"2025-12-13T03:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.727966 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ada8692e77f3fdcfb7af0831f398beba03791a2e7286360296b007c809a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:51Z\\\",\\\"message\\\":\\\"2025-12-13T03:45:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a02f7ef-fa69-4aca-9147-cbc0b38dbb8a\\\\n2025-12-13T03:45:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a02f7ef-fa69-4aca-9147-cbc0b38dbb8a to /host/opt/cni/bin/\\\\n2025-12-13T03:45:05Z [verbose] multus-daemon started\\\\n2025-12-13T03:45:05Z [verbose] Readiness Indicator file check\\\\n2025-12-13T03:45:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.739286 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.755904 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.776827 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.802494 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8
c9c9dcac8891d356a3933358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:37Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:37.078702 6548 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:37.078741 6548 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078794 6548 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:37.078804 6548 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078843 6548 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:37.078856 6548 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:37.078863 6548 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:37.078870 6548 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:37.078875 6548 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:37.078990 6548 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.079038 6548 factory.go:656] Stopping watch factory\\\\nI1213 03:45:37.079051 6548 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1213 03:45:37.079091 6548 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.813535 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.813573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.813586 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.813606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.813625 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:59Z","lastTransitionTime":"2025-12-13T03:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.825333 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.842349 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.857104 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.871173 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.904112 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.915792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.915827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:45:59 crc 
kubenswrapper[4766]: I1213 03:45:59.915839 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.915858 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.915871 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:45:59Z","lastTransitionTime":"2025-12-13T03:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:45:59 crc kubenswrapper[4766]: I1213 03:45:59.918852 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:45:59Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.018252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.018315 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.018332 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.018360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.018376 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:00Z","lastTransitionTime":"2025-12-13T03:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.120630 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.120675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.120688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.120725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.120736 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:00Z","lastTransitionTime":"2025-12-13T03:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.223440 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.223480 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.223492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.223510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.223521 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:00Z","lastTransitionTime":"2025-12-13T03:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.326409 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.326472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.326485 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.326504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.326517 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:00Z","lastTransitionTime":"2025-12-13T03:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.429414 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.429472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.429482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.429501 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.429514 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:00Z","lastTransitionTime":"2025-12-13T03:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.532544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.532613 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.532642 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.532671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.532689 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:00Z","lastTransitionTime":"2025-12-13T03:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.615608 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.615675 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.615690 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:00 crc kubenswrapper[4766]: E1213 03:46:00.615842 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:00 crc kubenswrapper[4766]: E1213 03:46:00.616113 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:00 crc kubenswrapper[4766]: E1213 03:46:00.616237 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.635418 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.635499 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.635513 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.635560 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.635575 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:00Z","lastTransitionTime":"2025-12-13T03:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.738693 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.738789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.738836 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.738875 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.738902 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:00Z","lastTransitionTime":"2025-12-13T03:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.842222 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.842293 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.842316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.842357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.842386 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:00Z","lastTransitionTime":"2025-12-13T03:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.945896 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.945961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.945975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.945997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:00 crc kubenswrapper[4766]: I1213 03:46:00.946008 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:00Z","lastTransitionTime":"2025-12-13T03:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.048381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.048474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.048507 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.048527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.048537 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:01Z","lastTransitionTime":"2025-12-13T03:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.150924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.150969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.150978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.150993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.151002 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:01Z","lastTransitionTime":"2025-12-13T03:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.254124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.254177 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.254191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.254212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.254227 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:01Z","lastTransitionTime":"2025-12-13T03:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.356927 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.356990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.357005 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.357026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.357036 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:01Z","lastTransitionTime":"2025-12-13T03:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.460641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.460692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.460704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.460723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.460736 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:01Z","lastTransitionTime":"2025-12-13T03:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.563802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.563861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.563875 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.563900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.563916 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:01Z","lastTransitionTime":"2025-12-13T03:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.585037 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585257 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.585216891 +0000 UTC m=+157.095149855 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.585321 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.585349 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.585382 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.585510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585605 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585631 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585645 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585651 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585661 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:46:01 crc 
kubenswrapper[4766]: E1213 03:46:01.585705 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.585695945 +0000 UTC m=+157.095628909 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585802 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.585771238 +0000 UTC m=+157.095704212 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585839 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.585823879 +0000 UTC m=+157.095756853 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585867 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585912 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.585952 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.586052 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.586031305 +0000 UTC m=+157.095964269 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.616020 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:01 crc kubenswrapper[4766]: E1213 03:46:01.616244 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.668019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.668075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.668086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.668105 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.668118 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:01Z","lastTransitionTime":"2025-12-13T03:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.771621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.771667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.771694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.771716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.771734 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:01Z","lastTransitionTime":"2025-12-13T03:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.874875 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.874941 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.874957 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.874983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.874996 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:01Z","lastTransitionTime":"2025-12-13T03:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.978991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.979052 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.979065 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.979086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:01 crc kubenswrapper[4766]: I1213 03:46:01.979099 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:01Z","lastTransitionTime":"2025-12-13T03:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.081641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.081673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.081682 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.081701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.081712 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:02Z","lastTransitionTime":"2025-12-13T03:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.184805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.184846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.184857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.184873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.184884 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:02Z","lastTransitionTime":"2025-12-13T03:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.288587 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.288645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.288656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.288676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.288689 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:02Z","lastTransitionTime":"2025-12-13T03:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.391560 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.391602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.391611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.391627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.391640 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:02Z","lastTransitionTime":"2025-12-13T03:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.494325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.494379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.494610 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.494634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.494654 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:02Z","lastTransitionTime":"2025-12-13T03:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.596875 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.596926 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.596935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.596955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.596967 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:02Z","lastTransitionTime":"2025-12-13T03:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.616274 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.616346 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.616276 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:02 crc kubenswrapper[4766]: E1213 03:46:02.616569 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:02 crc kubenswrapper[4766]: E1213 03:46:02.616775 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:02 crc kubenswrapper[4766]: E1213 03:46:02.617215 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.617656 4766 scope.go:117] "RemoveContainer" containerID="0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.699352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.699399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.699409 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.699440 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.699452 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:02Z","lastTransitionTime":"2025-12-13T03:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.802658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.802723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.802736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.802757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.802770 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:02Z","lastTransitionTime":"2025-12-13T03:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.904788 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.904832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.904845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.904865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:02 crc kubenswrapper[4766]: I1213 03:46:02.904877 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:02Z","lastTransitionTime":"2025-12-13T03:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.009711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.009766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.009782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.009801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.009820 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:03Z","lastTransitionTime":"2025-12-13T03:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.095791 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/2.log" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.097781 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.098237 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.113329 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.113364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.113376 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.113395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.113412 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:03Z","lastTransitionTime":"2025-12-13T03:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.117883 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.135101 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.152838 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6n4vc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b724d1e1-9ded-434e-b852-f5233f27ef32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ada8692e77f3fdcfb7af0831f398beba03791a2e7286360296b007c809a329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:51Z\\\",\\\"message\\\":\\\"2025-12-13T03:45:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a02f7ef-fa69-4aca-9147-cbc0b38dbb8a\\\\n2025-12-13T03:45:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a02f7ef-fa69-4aca-9147-cbc0b38dbb8a to /host/opt/cni/bin/\\\\n2025-12-13T03:45:05Z [verbose] multus-daemon started\\\\n2025-12-13T03:45:05Z [verbose] Readiness Indicator file check\\\\n2025-12-13T03:45:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdw2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6n4vc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.166092 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6hlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a99c6a2-c76c-4551-8e9f-a046e4723fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8507e51833dcb22a743e1089c76fbd0b885ad161fb76cd49961796c1d82808f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtcwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6hlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.184859 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b40427c-c501-4c74-a7e3-2e6f1343bc03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a79778b7b5de56ba93a75c81deac1a836df2ffb1b94c804d0b630800663dd443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://455382a79dfb5c2f3d7ad8fef604af58cc2b4ff33c3c31ffe0961205c4368b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfkml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ww5fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 
03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.207421 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb2779e9-fd47-4ea6-a75c-b0c24339b1c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T03:44:53Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 03:44:52.250326 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 03:44:52.250536 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 03:44:52.251699 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1102693067/tls.crt::/tmp/serving-cert-1102693067/tls.key\\\\\\\"\\\\nI1213 03:44:52.913338 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 03:44:52.918333 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 03:44:52.918373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 03:44:52.918488 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 03:44:52.918505 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 03:44:52.925473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1213 03:44:52.925500 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925505 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 03:44:52.925509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 03:44:52.925512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 03:44:52.925515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 03:44:52.925518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1213 03:44:52.926233 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1213 03:44:52.929662 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.216650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.216686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.216699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.216718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.216755 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:03Z","lastTransitionTime":"2025-12-13T03:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.226961 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe94c56a-7509-4186-bbcd-4802efe04d13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3029ef04617f0e5c14398e60155d2aedb444e077dfde7aae94ae9bd4c3e15522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dc2c32d6d6309d985178336665b46ef2649dabba60c341cad3edc6c1e92dbe7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d3f21cc1396dfefd4900b67321d5362cb47c350737ffbc84a5146a70892d2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.241391 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e4788b5-8be3-41ac-9bc6-2e8084f2cff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4af5ec01dcfebc7f2efef3b21b8a15139f420a6f353fa1931e9a6301af5b5779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://695be1c53bcda062deadefe90c3b3c333ce4631f2b0ec3234d15d844337263a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695be1c53bcda062deadefe90c3b3c333ce4631f2b0ec3234d15d844337263a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.257654 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b198d9863c41ba36714a220c745d1ba5b11cee1c052038b4799712ceb511dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.273053 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5afeeb63083965c59318b70702c99af05443f15a4fcfae7b95c77f0ca05b5e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.286040 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.299214 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5affdeee562d6934800b55c4b6fb659b32549ed5793b8c77bd5390c74e435fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea4091ab00354ae3487ceb5671d2ec71bb9b7817c86a551a839399236a6f08d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.319743 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.319786 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.319798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.319818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.319832 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:03Z","lastTransitionTime":"2025-12-13T03:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.321989 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2621562-4c91-40a3-ad72-29d325404496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-13T03:45:37Z\\\",\\\"message\\\":\\\"removal\\\\nI1213 03:45:37.078702 6548 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1213 03:45:37.078741 6548 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078794 6548 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1213 03:45:37.078804 6548 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.078843 6548 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1213 03:45:37.078856 6548 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1213 03:45:37.078863 6548 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1213 03:45:37.078870 6548 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1213 03:45:37.078875 6548 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1213 03:45:37.078990 6548 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1213 03:45:37.079038 6548 factory.go:656] Stopping watch factory\\\\nI1213 03:45:37.079051 6548 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1213 03:45:37.079091 6548 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9gcjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2dfkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.339633 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c89xg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c14fd8-7cc0-4f63-8900-c0ae7306d019\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7587993e5f7e3b62e3eb1456ca64e857ba0e4daf15fdea9c546c59f50f853517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf3a71a1c5902abb540ea19a5c4b00803ef011c87f87e14d2a748dfd1b53a989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05add39394d0000f689044d098722cddbd7b52859911e1ccb77c2a76dc7458c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8a8424994d8d618cbb8eea8d48c122ebabeaa0c23eb68a51cae2e696b7041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd97138809b19d7c63db2e2a3ded7ea5a8ecddc79bec58a1cd96a1e62acfef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b15acc50e532366f7d27fb89d0dddb6ad9dffae05b1da86c595c7e22624c136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9b697dccd6dde39814c19856d16e17fc34ecd734fb2c1421584c70faf52e21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4ghr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c89xg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.351883 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84c9636d-a525-40e8-bc35-af07ecbdeafc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5n59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:45:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvxrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.368376 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29363fc-e8a6-40d5-a73f-4bac6b47f073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:45:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c24d5ee6235813ce84ab1be241b9812df0541d5caefec1daa25ff912d31e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a615f91bcd70b5108f8a100a56aba2d311a265d13e8d2468f56720d88cab3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467ab5b59595a4d878501ba9fd0d98d3fbee40937722bf79484c10d1bf230253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bcb7b38fd64f78f1d79a333a8ec77d630c276d966dd603b39b0f3eaa14310d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T03:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T03:44:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.384810 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e6a48b-4f5d-4299-9c7b-98dbe11e670e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://462190e3c33dd39f6bfae528c15bdef87e92034868ec5c2fb2d6bba8a845ab05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpg8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94w9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.398373 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rxssr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ad063a-190c-4789-ab91-fb0909fde2ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T03:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617cacd4ad12e82f521eb4a306fd98154c7449ac2a11c07de11816780d9c7e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T03:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmdv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T03:44:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rxssr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:03Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.422664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.422707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.422718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.422740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.422753 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:03Z","lastTransitionTime":"2025-12-13T03:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.526797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.526869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.526887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.526915 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.526940 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:03Z","lastTransitionTime":"2025-12-13T03:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.616190 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:03 crc kubenswrapper[4766]: E1213 03:46:03.616401 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.629872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.629911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.629921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.629935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.629945 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:03Z","lastTransitionTime":"2025-12-13T03:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.637008 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.733021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.733089 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.733106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.733130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.733143 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:03Z","lastTransitionTime":"2025-12-13T03:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.836268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.836314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.836325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.836346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.836356 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:03Z","lastTransitionTime":"2025-12-13T03:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.939052 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.939099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.939111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.939130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:03 crc kubenswrapper[4766]: I1213 03:46:03.939141 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:03Z","lastTransitionTime":"2025-12-13T03:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.042512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.042556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.042567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.042584 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.042597 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:04Z","lastTransitionTime":"2025-12-13T03:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.145313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.145378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.145391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.145413 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.145448 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:04Z","lastTransitionTime":"2025-12-13T03:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.248633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.248685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.248699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.248719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.248729 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:04Z","lastTransitionTime":"2025-12-13T03:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.352359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.352412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.352458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.352483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.352681 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:04Z","lastTransitionTime":"2025-12-13T03:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.455828 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.455890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.455910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.455939 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.455955 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:04Z","lastTransitionTime":"2025-12-13T03:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.558898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.558945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.558955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.558973 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.558983 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:04Z","lastTransitionTime":"2025-12-13T03:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.615708 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.615797 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.615874 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm"
Dec 13 03:46:04 crc kubenswrapper[4766]: E1213 03:46:04.615897 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 13 03:46:04 crc kubenswrapper[4766]: E1213 03:46:04.616307 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc"
Dec 13 03:46:04 crc kubenswrapper[4766]: E1213 03:46:04.616336 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.699898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.700008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.700034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.700072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.700097 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:04Z","lastTransitionTime":"2025-12-13T03:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.803354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.803397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.803408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.803451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.803467 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:04Z","lastTransitionTime":"2025-12-13T03:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.906834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.906915 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.906931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.906961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:46:04 crc kubenswrapper[4766]: I1213 03:46:04.906981 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:04Z","lastTransitionTime":"2025-12-13T03:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.010446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.010505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.010516 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.010537 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.010549 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.040931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.041001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.041010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.041031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.041048 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: E1213 03:46:05.060020 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.066170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.066242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.066263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.066299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.066317 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: E1213 03:46:05.084157 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.090340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.090390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.090407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.090447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.090463 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: E1213 03:46:05.112827 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.118927 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.118963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.118972 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.118992 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.119003 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: E1213 03:46:05.145931 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.153888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.153953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.153965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.153986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.153999 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: E1213 03:46:05.173230 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T03:46:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a96b01-4309-433a-9f85-4245993e96cc\\\",\\\"systemUUID\\\":\\\"2794a81b-3be3-453b-be1b-91ab43e5fda5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T03:46:05Z is after 2025-08-24T17:21:41Z" Dec 13 03:46:05 crc kubenswrapper[4766]: E1213 03:46:05.173365 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.179094 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
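
The retried patch fails identically, and after the fixed retry budget the kubelet gives up until the next sync ("update node status exceeds retry count"). The payload itself is unremarkable; for instance, the difference between the quoted capacity and allocatable values is just the node's system reservation. Plain arithmetic on the numbers in the rejected payload above, nothing queried from the cluster:

    # capacity minus allocatable, values copied from the status payload above
    cap_mem_ki, alloc_mem_ki = 32865360, 32404560
    cap_cpu_m,  alloc_cpu_m  = 12000, 11800

    print(f"memory held back from pods: {(cap_mem_ki - alloc_mem_ki) // 1024} Mi")  # 450 Mi
    print(f"cpu held back from pods:    {cap_cpu_m - alloc_cpu_m} millicores")      # 200m
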
event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.179178 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.179204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.179234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.179261 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.282500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.282544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.282558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.282579 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.282591 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.386381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.386493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.386509 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.386532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.386545 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.490971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.491040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.491060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.491086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.491100 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.594683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.595099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.595117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.595141 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.595153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.616222 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:05 crc kubenswrapper[4766]: E1213 03:46:05.616710 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
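
Every "Node became not ready" heartbeat in this stretch carries the same root cause string: no CNI configuration file under /etc/kubernetes/cni/net.d/. On an OVN-Kubernetes cluster like this one, that file is normally written once ovnkube-node's controller stays up, so the node stays NotReady for as long as the controller keeps crashing. A small observation aid, assuming it is run on the node with Python; it is not part of any OpenShift component, and the directory and file extensions (*.conf, *.conflist, *.json are what libcni scans for) are the only facts taken from elsewhere:

    import glob
    import os
    import time

    # Directory copied from the kubelet message above; poll until a CNI
    # config shows up, at which point the runtime reports NetworkReady.
    CNI_DIR = "/etc/kubernetes/cni/net.d"

    while True:
        confs = sorted(glob.glob(os.path.join(CNI_DIR, "*.conf*"))) + \
                sorted(glob.glob(os.path.join(CNI_DIR, "*.json")))
        print(time.strftime("%H:%M:%S"), confs or "no CNI configuration yet")
        if confs:
            break
        time.sleep(2)
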
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.699402 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.699517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.699548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.699583 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.699609 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.803593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.803659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.803678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.803727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.803740 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.907106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.907208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.907221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.907260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:05 crc kubenswrapper[4766]: I1213 03:46:05.907273 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:05Z","lastTransitionTime":"2025-12-13T03:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.009747 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.009799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.009809 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.009827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.009838 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:06Z","lastTransitionTime":"2025-12-13T03:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.112287 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/3.log"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.112347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.112388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.112399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.112420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.112448 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:06Z","lastTransitionTime":"2025-12-13T03:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.113026 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/2.log"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.116493 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8" exitCode=1
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.116545 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"}
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.116606 4766 scope.go:117] "RemoveContainer" containerID="0bd1244423f54b79ded5573e82039b04eef50bf8c9c9dcac8891d356a3933358"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.117479 4766 scope.go:117] "RemoveContainer" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"
Dec 13 03:46:06 crc kubenswrapper[4766]: E1213 03:46:06.117701 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.179538 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=65.179511154 podStartE2EDuration="1m5.179511154s" podCreationTimestamp="2025-12-13 03:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.177405282 +0000 UTC m=+97.687338266" watchObservedRunningTime="2025-12-13 03:46:06.179511154 +0000 UTC m=+97.689444118"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.211359 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.211329964 podStartE2EDuration="7.211329964s" podCreationTimestamp="2025-12-13 03:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.210896972 +0000 UTC m=+97.720829936" watchObservedRunningTime="2025-12-13 03:46:06.211329964 +0000 UTC m=+97.721262928"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.211727 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.211720586 podStartE2EDuration="1m9.211720586s" podCreationTimestamp="2025-12-13 03:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.194787755 +0000 UTC m=+97.704720719" watchObservedRunningTime="2025-12-13 03:46:06.211720586 +0000 UTC m=+97.721653550"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.215339 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.215487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.215499 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.215526 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.215536 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:06Z","lastTransitionTime":"2025-12-13T03:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.313325 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-c89xg" podStartSLOduration=73.313302808 podStartE2EDuration="1m13.313302808s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.297909434 +0000 UTC m=+97.807842398" watchObservedRunningTime="2025-12-13 03:46:06.313302808 +0000 UTC m=+97.823235772"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.318385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.318421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.318463 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.318477 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.318488 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:06Z","lastTransitionTime":"2025-12-13T03:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.329185 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.329154826999996 podStartE2EDuration="40.329154827s" podCreationTimestamp="2025-12-13 03:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.328015253 +0000 UTC m=+97.837948217" watchObservedRunningTime="2025-12-13 03:46:06.329154827 +0000 UTC m=+97.839087791"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.368456 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podStartSLOduration=73.368414587 podStartE2EDuration="1m13.368414587s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.346571842 +0000 UTC m=+97.856504806" watchObservedRunningTime="2025-12-13 03:46:06.368414587 +0000 UTC m=+97.878347551"
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.369017 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rxssr" podStartSLOduration=73.369011455 podStartE2EDuration="1m13.369011455s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.368348945 +0000 UTC m=+97.878281909" watchObservedRunningTime="2025-12-13 03:46:06.369011455 +0000 UTC m=+97.878944419"
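
The pod_startup_latency_tracker entries above record, per pod, the SLO and end-to-end startup durations plus wall-clock timestamps carrying Go monotonic offsets (m=+97.7 is seconds since the kubelet process started, which matches a kubelet launch around 03:44:28). A toy parser for these lines; the sample is abridged from the node-resolver entry above, and only the field names are taken from the log:

    import re

    # Abridged from the pod_startup_latency_tracker entry above.
    SAMPLE = ('Observed pod startup duration" pod="openshift-dns/node-resolver-rxssr" '
              'podStartSLOduration=73.369011455 podStartE2EDuration="1m13.369011455s"')

    m = re.search(r'pod="([^"]+)".*?podStartSLOduration=([\d.]+)', SAMPLE)
    if m:
        # Here SLO and E2E durations coincide because no image pulls were
        # observed (firstStartedPulling is the zero time in these entries).
        print(f"{m.group(1)}: {float(m.group(2)):.1f}s from creation to running")
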
Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.399534 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.399508777 podStartE2EDuration="3.399508777s" podCreationTimestamp="2025-12-13 03:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.398450195 +0000 UTC m=+97.908383159" watchObservedRunningTime="2025-12-13 03:46:06.399508777 +0000 UTC m=+97.909441741" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.422215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.422287 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.422302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.422328 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.422343 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:06Z","lastTransitionTime":"2025-12-13T03:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.450144 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6n4vc" podStartSLOduration=73.450113432 podStartE2EDuration="1m13.450113432s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.448736581 +0000 UTC m=+97.958669555" watchObservedRunningTime="2025-12-13 03:46:06.450113432 +0000 UTC m=+97.960046406" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.464019 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-n6hlf" podStartSLOduration=73.463991722 podStartE2EDuration="1m13.463991722s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.463342783 +0000 UTC m=+97.973275747" watchObservedRunningTime="2025-12-13 03:46:06.463991722 +0000 UTC m=+97.973924686" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.486057 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ww5fb" podStartSLOduration=72.486027684 podStartE2EDuration="1m12.486027684s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:06.485351864 +0000 UTC m=+97.995284848" watchObservedRunningTime="2025-12-13 03:46:06.486027684 +0000 UTC m=+97.995960648" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.525707 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.525762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.525775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.525797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.525811 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:06Z","lastTransitionTime":"2025-12-13T03:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.615856 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.615995 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.616024 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:06 crc kubenswrapper[4766]: E1213 03:46:06.616221 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:06 crc kubenswrapper[4766]: E1213 03:46:06.616334 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:06 crc kubenswrapper[4766]: E1213 03:46:06.616458 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
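
The ovnkube-controller sequence at 03:46:06 above (ContainerDied with exitCode=1, RemoveContainer, then "back-off 40s restarting failed container") is the other half of the deadlock: the container that would write the CNI config keeps crashing, and the kubelet defers each restart. A sketch of the delay schedule, assuming the upstream kubelet defaults of a 10s base doubling to a 5min cap; these values are not read from this cluster:

    # CrashLoopBackOff delay doubles per failed restart: 10s, 20s, 40s, ...
    # capped at 300s. "back-off 40s" above is therefore the third step.
    delay, cap = 10, 300
    for restart in range(1, 7):
        print(f"restart {restart}: back-off {delay}s")
        if delay == 40:
            break
        delay = min(delay * 2, cap)
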
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.628333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.628377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.628390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.628410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.628420 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:06Z","lastTransitionTime":"2025-12-13T03:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.730906 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.730951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.730965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.730985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.730996 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:06Z","lastTransitionTime":"2025-12-13T03:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.834130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.834177 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.834195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.834218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.834233 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:06Z","lastTransitionTime":"2025-12-13T03:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.937029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.937089 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.937103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.937123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:06 crc kubenswrapper[4766]: I1213 03:46:06.937135 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:06Z","lastTransitionTime":"2025-12-13T03:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.041057 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.041153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.041171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.041202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.041223 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:07Z","lastTransitionTime":"2025-12-13T03:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.123699 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/3.log" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.144508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.144582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.144606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.144637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.144661 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:07Z","lastTransitionTime":"2025-12-13T03:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.248338 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.248394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.248410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.248444 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.248461 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:07Z","lastTransitionTime":"2025-12-13T03:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.387081 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.387139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.387153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.387193 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.387204 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:07Z","lastTransitionTime":"2025-12-13T03:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.489807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.489865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.489877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.489896 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.489909 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:07Z","lastTransitionTime":"2025-12-13T03:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.592246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.592293 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.592357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.592375 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.592385 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:07Z","lastTransitionTime":"2025-12-13T03:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.616151 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:07 crc kubenswrapper[4766]: E1213 03:46:07.616338 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.695817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.695870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.695882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.695903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.695919 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:07Z","lastTransitionTime":"2025-12-13T03:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.798529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.798615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.798627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.798658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.798669 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:07Z","lastTransitionTime":"2025-12-13T03:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.901092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.901148 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.901157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.901177 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:07 crc kubenswrapper[4766]: I1213 03:46:07.901190 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:07Z","lastTransitionTime":"2025-12-13T03:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.004246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.004321 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.004344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.004382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.004416 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:08Z","lastTransitionTime":"2025-12-13T03:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.107624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.107677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.107691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.107709 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.107729 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:08Z","lastTransitionTime":"2025-12-13T03:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.256863 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.256965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.256977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.256994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.257033 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:08Z","lastTransitionTime":"2025-12-13T03:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.359637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.359686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.359696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.359716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.359728 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:08Z","lastTransitionTime":"2025-12-13T03:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.462920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.462964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.462975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.462992 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.463002 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:08Z","lastTransitionTime":"2025-12-13T03:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.566384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.566459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.566472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.566495 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.566510 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:08Z","lastTransitionTime":"2025-12-13T03:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.615227 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.615234 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:08 crc kubenswrapper[4766]: E1213 03:46:08.615368 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.615251 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:08 crc kubenswrapper[4766]: E1213 03:46:08.615631 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:08 crc kubenswrapper[4766]: E1213 03:46:08.615831 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.669044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.669112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.669126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.669151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.669167 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:08Z","lastTransitionTime":"2025-12-13T03:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.826993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.827090 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.827109 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.827132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.827148 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:08Z","lastTransitionTime":"2025-12-13T03:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.929625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.929698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.929714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.929739 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:08 crc kubenswrapper[4766]: I1213 03:46:08.929751 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:08Z","lastTransitionTime":"2025-12-13T03:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.032910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.032977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.032986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.033003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.033015 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:09Z","lastTransitionTime":"2025-12-13T03:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.136075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.136127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.136138 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.136159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.136171 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:09Z","lastTransitionTime":"2025-12-13T03:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.239879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.240032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.240073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.240108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.240136 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:09Z","lastTransitionTime":"2025-12-13T03:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.343998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.344077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.344096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.344123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.344143 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:09Z","lastTransitionTime":"2025-12-13T03:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.448044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.448098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.448111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.448132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.448145 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:09Z","lastTransitionTime":"2025-12-13T03:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.551024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.551072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.551086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.551105 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.551116 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:09Z","lastTransitionTime":"2025-12-13T03:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.615422 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:09 crc kubenswrapper[4766]: E1213 03:46:09.616783 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.653133 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.653208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.653227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.653246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.653258 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:09Z","lastTransitionTime":"2025-12-13T03:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.756650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.756721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.756739 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.756758 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.756768 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:09Z","lastTransitionTime":"2025-12-13T03:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.859667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.859771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.859792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.859824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.859860 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:09Z","lastTransitionTime":"2025-12-13T03:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.963805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.963869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.963889 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.963918 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:09 crc kubenswrapper[4766]: I1213 03:46:09.963938 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:09Z","lastTransitionTime":"2025-12-13T03:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.066807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.066856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.066871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.066893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.066906 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:10Z","lastTransitionTime":"2025-12-13T03:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.170571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.170670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.170703 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.170742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.170769 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:10Z","lastTransitionTime":"2025-12-13T03:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.276486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.276549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.276563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.276592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.276605 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:10Z","lastTransitionTime":"2025-12-13T03:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.379078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.379125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.379135 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.379162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.379180 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:10Z","lastTransitionTime":"2025-12-13T03:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.483619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.483679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.483693 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.483713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.483726 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:10Z","lastTransitionTime":"2025-12-13T03:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.587512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.587577 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.587591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.587613 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.587625 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:10Z","lastTransitionTime":"2025-12-13T03:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.616235 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:10 crc kubenswrapper[4766]: E1213 03:46:10.616463 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.616627 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.616766 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:10 crc kubenswrapper[4766]: E1213 03:46:10.616941 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:10 crc kubenswrapper[4766]: E1213 03:46:10.617022 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.690867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.690916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.690929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.690949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.690963 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:10Z","lastTransitionTime":"2025-12-13T03:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.794564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.794627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.794646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.794675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.794696 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:10Z","lastTransitionTime":"2025-12-13T03:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.898221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.898387 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.898513 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.898545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:10 crc kubenswrapper[4766]: I1213 03:46:10.898610 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:10Z","lastTransitionTime":"2025-12-13T03:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.003320 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.003398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.003414 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.003460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.003479 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:11Z","lastTransitionTime":"2025-12-13T03:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.106608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.106659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.106686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.106711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.106727 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:11Z","lastTransitionTime":"2025-12-13T03:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.209822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.209865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.209875 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.209893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.209904 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:11Z","lastTransitionTime":"2025-12-13T03:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.312246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.312299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.312312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.312356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.312369 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:11Z","lastTransitionTime":"2025-12-13T03:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.415924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.415996 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.416021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.416052 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.416075 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:11Z","lastTransitionTime":"2025-12-13T03:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.519599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.519692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.519733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.519793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.519823 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:11Z","lastTransitionTime":"2025-12-13T03:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.616414 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:11 crc kubenswrapper[4766]: E1213 03:46:11.616617 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.673577 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.673633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.673647 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.673669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.673684 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:11Z","lastTransitionTime":"2025-12-13T03:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.776786 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.776855 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.776871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.776894 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.776909 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:11Z","lastTransitionTime":"2025-12-13T03:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.879765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.879835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.879849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.879874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.879886 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:11Z","lastTransitionTime":"2025-12-13T03:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.983356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.983475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.983500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.983529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:11 crc kubenswrapper[4766]: I1213 03:46:11.983556 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:11Z","lastTransitionTime":"2025-12-13T03:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.085970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.086022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.086033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.086050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.086066 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:12Z","lastTransitionTime":"2025-12-13T03:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.189949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.190020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.190038 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.190071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.190091 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:12Z","lastTransitionTime":"2025-12-13T03:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.292506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.292606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.292622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.292705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.292734 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:12Z","lastTransitionTime":"2025-12-13T03:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.396490 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.396558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.396577 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.396603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.396622 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:12Z","lastTransitionTime":"2025-12-13T03:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.500314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.500409 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.500467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.500498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.500516 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:12Z","lastTransitionTime":"2025-12-13T03:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.604528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.604620 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.604638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.604660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.604672 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:12Z","lastTransitionTime":"2025-12-13T03:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.616034 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.616180 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:12 crc kubenswrapper[4766]: E1213 03:46:12.616249 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.616331 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:12 crc kubenswrapper[4766]: E1213 03:46:12.616407 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:12 crc kubenswrapper[4766]: E1213 03:46:12.616550 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.709558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.709596 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.709607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.709628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.709640 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:12Z","lastTransitionTime":"2025-12-13T03:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.813837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.813934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.813951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.814032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.814062 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:12Z","lastTransitionTime":"2025-12-13T03:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.919037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.919111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.919138 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.919169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:12 crc kubenswrapper[4766]: I1213 03:46:12.919244 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:12Z","lastTransitionTime":"2025-12-13T03:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.022457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.022523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.022543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.022566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.022588 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:13Z","lastTransitionTime":"2025-12-13T03:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.125602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.126044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.126323 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.126549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.126735 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:13Z","lastTransitionTime":"2025-12-13T03:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.230120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.230877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.231087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.231185 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.231265 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:13Z","lastTransitionTime":"2025-12-13T03:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.334761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.334835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.334849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.334878 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.334893 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:13Z","lastTransitionTime":"2025-12-13T03:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.437207 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.437676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.437812 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.437940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.438044 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:13Z","lastTransitionTime":"2025-12-13T03:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.540720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.540783 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.540800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.540822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.540834 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:13Z","lastTransitionTime":"2025-12-13T03:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.616364 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:13 crc kubenswrapper[4766]: E1213 03:46:13.616561 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.643657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.643723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.643742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.643764 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.643779 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:13Z","lastTransitionTime":"2025-12-13T03:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.746402 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.746478 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.746490 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.746506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.746516 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:13Z","lastTransitionTime":"2025-12-13T03:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.850088 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.850151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.850171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.850191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.850213 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:13Z","lastTransitionTime":"2025-12-13T03:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.953688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.953749 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.953773 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.953806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:13 crc kubenswrapper[4766]: I1213 03:46:13.953832 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:13Z","lastTransitionTime":"2025-12-13T03:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.057667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.057755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.057777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.057841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.057871 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:14Z","lastTransitionTime":"2025-12-13T03:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.161410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.161582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.161608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.161637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.161700 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:14Z","lastTransitionTime":"2025-12-13T03:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.264634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.264693 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.264707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.264735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.264750 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:14Z","lastTransitionTime":"2025-12-13T03:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.367876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.367937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.367958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.368017 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.368033 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:14Z","lastTransitionTime":"2025-12-13T03:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.470696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.470737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.470750 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.470766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.470777 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:14Z","lastTransitionTime":"2025-12-13T03:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.573304 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.573353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.573366 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.573387 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.573402 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:14Z","lastTransitionTime":"2025-12-13T03:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.616126 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.616273 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.616126 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:14 crc kubenswrapper[4766]: E1213 03:46:14.616358 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:14 crc kubenswrapper[4766]: E1213 03:46:14.616507 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:14 crc kubenswrapper[4766]: E1213 03:46:14.616627 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.629766 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:14 crc kubenswrapper[4766]: E1213 03:46:14.630193 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:46:14 crc kubenswrapper[4766]: E1213 03:46:14.630358 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs podName:84c9636d-a525-40e8-bc35-af07ecbdeafc nodeName:}" failed. No retries permitted until 2025-12-13 03:47:18.630304215 +0000 UTC m=+170.140237219 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs") pod "network-metrics-daemon-qvxrm" (UID: "84c9636d-a525-40e8-bc35-af07ecbdeafc") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.676137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.676173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.676182 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.676200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.676209 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:14Z","lastTransitionTime":"2025-12-13T03:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.778910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.778956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.778967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.778984 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.778996 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:14Z","lastTransitionTime":"2025-12-13T03:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.882948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.882999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.883009 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.883030 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.883053 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:14Z","lastTransitionTime":"2025-12-13T03:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.986022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.986062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.986072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.986092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:14 crc kubenswrapper[4766]: I1213 03:46:14.986102 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:14Z","lastTransitionTime":"2025-12-13T03:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.089410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.089488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.089504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.089522 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.089539 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:15Z","lastTransitionTime":"2025-12-13T03:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.192132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.192174 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.192185 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.192203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.192215 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:15Z","lastTransitionTime":"2025-12-13T03:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.295381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.295419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.295452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.295469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.295479 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:15Z","lastTransitionTime":"2025-12-13T03:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.398102 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.398140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.398152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.398173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.398185 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:15Z","lastTransitionTime":"2025-12-13T03:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.502892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.502960 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.502981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.503016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.503032 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:15Z","lastTransitionTime":"2025-12-13T03:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.527870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.527920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.527934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.527954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.527966 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T03:46:15Z","lastTransitionTime":"2025-12-13T03:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.615466 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:15 crc kubenswrapper[4766]: E1213 03:46:15.615656 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.626575 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl"] Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.627134 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.630519 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.630534 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.630720 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.630573 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.741595 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.741689 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.741880 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.741962 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.742007 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.843260 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 
03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.843360 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.843412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.843416 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.843518 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.843470 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.843631 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.844762 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.852315 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.863164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/d2c8a1b8-91ea-40b5-9913-bd38b120f0a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-k9xtl\" (UID: \"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:15 crc kubenswrapper[4766]: I1213 03:46:15.961028 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" Dec 13 03:46:16 crc kubenswrapper[4766]: I1213 03:46:16.285109 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" event={"ID":"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4","Type":"ContainerStarted","Data":"44a0b5f59a1ee90a916ce3152025b95699e38b151d8d4f983bfb625060879330"} Dec 13 03:46:16 crc kubenswrapper[4766]: I1213 03:46:16.615235 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:16 crc kubenswrapper[4766]: E1213 03:46:16.615400 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:16 crc kubenswrapper[4766]: I1213 03:46:16.615650 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:16 crc kubenswrapper[4766]: E1213 03:46:16.615719 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:16 crc kubenswrapper[4766]: I1213 03:46:16.615769 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:16 crc kubenswrapper[4766]: E1213 03:46:16.615942 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:17 crc kubenswrapper[4766]: I1213 03:46:17.291178 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" event={"ID":"d2c8a1b8-91ea-40b5-9913-bd38b120f0a4","Type":"ContainerStarted","Data":"61646783bea26a4db920b934ddd5fec8a60b15da26488daf332ee840cfa80cc8"} Dec 13 03:46:17 crc kubenswrapper[4766]: I1213 03:46:17.320060 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9xtl" podStartSLOduration=84.320032806 podStartE2EDuration="1m24.320032806s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:17.318937393 +0000 UTC m=+108.828870357" watchObservedRunningTime="2025-12-13 03:46:17.320032806 +0000 UTC m=+108.829965790" Dec 13 03:46:17 crc kubenswrapper[4766]: I1213 03:46:17.615913 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:17 crc kubenswrapper[4766]: E1213 03:46:17.616118 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:18 crc kubenswrapper[4766]: I1213 03:46:18.616270 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:18 crc kubenswrapper[4766]: I1213 03:46:18.616339 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:18 crc kubenswrapper[4766]: I1213 03:46:18.616476 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:18 crc kubenswrapper[4766]: E1213 03:46:18.616552 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:18 crc kubenswrapper[4766]: E1213 03:46:18.616751 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:18 crc kubenswrapper[4766]: E1213 03:46:18.617096 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:18 crc kubenswrapper[4766]: I1213 03:46:18.617520 4766 scope.go:117] "RemoveContainer" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8" Dec 13 03:46:18 crc kubenswrapper[4766]: E1213 03:46:18.617694 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" Dec 13 03:46:19 crc kubenswrapper[4766]: I1213 03:46:19.615589 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:19 crc kubenswrapper[4766]: E1213 03:46:19.624571 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:20 crc kubenswrapper[4766]: I1213 03:46:20.615866 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:20 crc kubenswrapper[4766]: I1213 03:46:20.615925 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:20 crc kubenswrapper[4766]: E1213 03:46:20.616005 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:20 crc kubenswrapper[4766]: I1213 03:46:20.615925 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:20 crc kubenswrapper[4766]: E1213 03:46:20.616120 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:20 crc kubenswrapper[4766]: E1213 03:46:20.616145 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:21 crc kubenswrapper[4766]: I1213 03:46:21.615619 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:21 crc kubenswrapper[4766]: E1213 03:46:21.615899 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:22 crc kubenswrapper[4766]: I1213 03:46:22.616159 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:22 crc kubenswrapper[4766]: I1213 03:46:22.616163 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:22 crc kubenswrapper[4766]: I1213 03:46:22.616702 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:22 crc kubenswrapper[4766]: E1213 03:46:22.616777 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:22 crc kubenswrapper[4766]: E1213 03:46:22.616895 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:22 crc kubenswrapper[4766]: E1213 03:46:22.616699 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:23 crc kubenswrapper[4766]: I1213 03:46:23.615456 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:23 crc kubenswrapper[4766]: E1213 03:46:23.615725 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:24 crc kubenswrapper[4766]: I1213 03:46:24.615792 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:24 crc kubenswrapper[4766]: E1213 03:46:24.615981 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:24 crc kubenswrapper[4766]: I1213 03:46:24.616035 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:24 crc kubenswrapper[4766]: I1213 03:46:24.616121 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:24 crc kubenswrapper[4766]: E1213 03:46:24.616226 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:24 crc kubenswrapper[4766]: E1213 03:46:24.616478 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:25 crc kubenswrapper[4766]: I1213 03:46:25.615724 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:25 crc kubenswrapper[4766]: E1213 03:46:25.615914 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:26 crc kubenswrapper[4766]: I1213 03:46:26.615331 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:26 crc kubenswrapper[4766]: I1213 03:46:26.615359 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:26 crc kubenswrapper[4766]: E1213 03:46:26.615849 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:26 crc kubenswrapper[4766]: E1213 03:46:26.615973 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:26 crc kubenswrapper[4766]: I1213 03:46:26.616129 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:26 crc kubenswrapper[4766]: E1213 03:46:26.616356 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:27 crc kubenswrapper[4766]: I1213 03:46:27.615602 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:27 crc kubenswrapper[4766]: E1213 03:46:27.615924 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:28 crc kubenswrapper[4766]: I1213 03:46:28.616287 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:28 crc kubenswrapper[4766]: I1213 03:46:28.616349 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:28 crc kubenswrapper[4766]: E1213 03:46:28.616514 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:28 crc kubenswrapper[4766]: I1213 03:46:28.616288 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:28 crc kubenswrapper[4766]: E1213 03:46:28.616695 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:28 crc kubenswrapper[4766]: E1213 03:46:28.616835 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:29 crc kubenswrapper[4766]: E1213 03:46:29.388385 4766 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 13 03:46:29 crc kubenswrapper[4766]: I1213 03:46:29.616309 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:29 crc kubenswrapper[4766]: E1213 03:46:29.618291 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:29 crc kubenswrapper[4766]: E1213 03:46:29.815563 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 03:46:30 crc kubenswrapper[4766]: I1213 03:46:30.615692 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:30 crc kubenswrapper[4766]: E1213 03:46:30.615992 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:30 crc kubenswrapper[4766]: I1213 03:46:30.616252 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:30 crc kubenswrapper[4766]: I1213 03:46:30.616252 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:30 crc kubenswrapper[4766]: E1213 03:46:30.617066 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:30 crc kubenswrapper[4766]: E1213 03:46:30.617215 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:30 crc kubenswrapper[4766]: I1213 03:46:30.617506 4766 scope.go:117] "RemoveContainer" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8" Dec 13 03:46:30 crc kubenswrapper[4766]: E1213 03:46:30.617714 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2dfkj_openshift-ovn-kubernetes(c2621562-4c91-40a3-ad72-29d325404496)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" Dec 13 03:46:31 crc kubenswrapper[4766]: I1213 03:46:31.616305 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:31 crc kubenswrapper[4766]: E1213 03:46:31.616615 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:32 crc kubenswrapper[4766]: I1213 03:46:32.616264 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:32 crc kubenswrapper[4766]: I1213 03:46:32.616355 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:32 crc kubenswrapper[4766]: I1213 03:46:32.616387 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:32 crc kubenswrapper[4766]: E1213 03:46:32.616565 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:32 crc kubenswrapper[4766]: E1213 03:46:32.616699 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:32 crc kubenswrapper[4766]: E1213 03:46:32.616966 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:33 crc kubenswrapper[4766]: I1213 03:46:33.615899 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:33 crc kubenswrapper[4766]: E1213 03:46:33.616162 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:34 crc kubenswrapper[4766]: I1213 03:46:34.615270 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:34 crc kubenswrapper[4766]: E1213 03:46:34.615480 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:34 crc kubenswrapper[4766]: I1213 03:46:34.615541 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:34 crc kubenswrapper[4766]: I1213 03:46:34.615604 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:34 crc kubenswrapper[4766]: E1213 03:46:34.615884 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:34 crc kubenswrapper[4766]: E1213 03:46:34.616028 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:34 crc kubenswrapper[4766]: E1213 03:46:34.817974 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 03:46:35 crc kubenswrapper[4766]: I1213 03:46:35.616215 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:35 crc kubenswrapper[4766]: E1213 03:46:35.616387 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:36 crc kubenswrapper[4766]: I1213 03:46:36.615808 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:36 crc kubenswrapper[4766]: I1213 03:46:36.615939 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:36 crc kubenswrapper[4766]: I1213 03:46:36.615808 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:36 crc kubenswrapper[4766]: E1213 03:46:36.616020 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:36 crc kubenswrapper[4766]: E1213 03:46:36.616166 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:36 crc kubenswrapper[4766]: E1213 03:46:36.616361 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:37 crc kubenswrapper[4766]: I1213 03:46:37.616265 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:37 crc kubenswrapper[4766]: E1213 03:46:37.616655 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:38 crc kubenswrapper[4766]: I1213 03:46:38.378330 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/1.log" Dec 13 03:46:38 crc kubenswrapper[4766]: I1213 03:46:38.379010 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/0.log" Dec 13 03:46:38 crc kubenswrapper[4766]: I1213 03:46:38.379081 4766 generic.go:334] "Generic (PLEG): container finished" podID="b724d1e1-9ded-434e-b852-f5233f27ef32" containerID="40ada8692e77f3fdcfb7af0831f398beba03791a2e7286360296b007c809a329" exitCode=1 Dec 13 03:46:38 crc kubenswrapper[4766]: I1213 03:46:38.379125 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6n4vc" event={"ID":"b724d1e1-9ded-434e-b852-f5233f27ef32","Type":"ContainerDied","Data":"40ada8692e77f3fdcfb7af0831f398beba03791a2e7286360296b007c809a329"} Dec 13 03:46:38 crc kubenswrapper[4766]: I1213 03:46:38.379186 4766 scope.go:117] "RemoveContainer" containerID="ccb3c6d18667029c056e662a0ee12c3a3afcb3935e1e2a789458fe57a7dfd3b0" Dec 13 03:46:38 crc kubenswrapper[4766]: I1213 03:46:38.379846 4766 scope.go:117] "RemoveContainer" containerID="40ada8692e77f3fdcfb7af0831f398beba03791a2e7286360296b007c809a329" Dec 13 03:46:38 crc kubenswrapper[4766]: E1213 03:46:38.380122 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6n4vc_openshift-multus(b724d1e1-9ded-434e-b852-f5233f27ef32)\"" pod="openshift-multus/multus-6n4vc" podUID="b724d1e1-9ded-434e-b852-f5233f27ef32" Dec 13 03:46:38 crc kubenswrapper[4766]: I1213 03:46:38.616103 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:38 crc kubenswrapper[4766]: I1213 03:46:38.616213 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:38 crc kubenswrapper[4766]: E1213 03:46:38.616332 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:38 crc kubenswrapper[4766]: E1213 03:46:38.616415 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:38 crc kubenswrapper[4766]: I1213 03:46:38.616566 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:38 crc kubenswrapper[4766]: E1213 03:46:38.616669 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:39 crc kubenswrapper[4766]: I1213 03:46:39.385651 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/1.log" Dec 13 03:46:39 crc kubenswrapper[4766]: I1213 03:46:39.615847 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:39 crc kubenswrapper[4766]: E1213 03:46:39.617988 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:39 crc kubenswrapper[4766]: E1213 03:46:39.819211 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 03:46:40 crc kubenswrapper[4766]: I1213 03:46:40.616255 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:40 crc kubenswrapper[4766]: I1213 03:46:40.616503 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:40 crc kubenswrapper[4766]: E1213 03:46:40.616583 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:40 crc kubenswrapper[4766]: I1213 03:46:40.616279 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:40 crc kubenswrapper[4766]: E1213 03:46:40.616754 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:40 crc kubenswrapper[4766]: E1213 03:46:40.616951 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:41 crc kubenswrapper[4766]: I1213 03:46:41.615785 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:41 crc kubenswrapper[4766]: E1213 03:46:41.615995 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:42 crc kubenswrapper[4766]: I1213 03:46:42.615665 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:42 crc kubenswrapper[4766]: I1213 03:46:42.615777 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:42 crc kubenswrapper[4766]: E1213 03:46:42.615843 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:42 crc kubenswrapper[4766]: I1213 03:46:42.615674 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:42 crc kubenswrapper[4766]: E1213 03:46:42.615996 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:42 crc kubenswrapper[4766]: E1213 03:46:42.616121 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:43 crc kubenswrapper[4766]: I1213 03:46:43.615646 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:43 crc kubenswrapper[4766]: E1213 03:46:43.615848 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:44 crc kubenswrapper[4766]: I1213 03:46:44.616108 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:44 crc kubenswrapper[4766]: I1213 03:46:44.616234 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:44 crc kubenswrapper[4766]: I1213 03:46:44.616148 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:44 crc kubenswrapper[4766]: E1213 03:46:44.616387 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:44 crc kubenswrapper[4766]: E1213 03:46:44.616815 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:44 crc kubenswrapper[4766]: E1213 03:46:44.616997 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:44 crc kubenswrapper[4766]: E1213 03:46:44.821154 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 03:46:45 crc kubenswrapper[4766]: I1213 03:46:45.615836 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:45 crc kubenswrapper[4766]: E1213 03:46:45.616091 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:45 crc kubenswrapper[4766]: I1213 03:46:45.617146 4766 scope.go:117] "RemoveContainer" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8" Dec 13 03:46:46 crc kubenswrapper[4766]: I1213 03:46:46.419351 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/3.log" Dec 13 03:46:46 crc kubenswrapper[4766]: I1213 03:46:46.422961 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerStarted","Data":"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"} Dec 13 03:46:46 crc kubenswrapper[4766]: I1213 03:46:46.423586 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:46:46 crc kubenswrapper[4766]: I1213 03:46:46.463292 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podStartSLOduration=112.463269398 podStartE2EDuration="1m52.463269398s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:46:46.461822226 +0000 UTC m=+137.971755210" watchObservedRunningTime="2025-12-13 03:46:46.463269398 +0000 UTC m=+137.973202362" Dec 13 03:46:46 crc kubenswrapper[4766]: I1213 03:46:46.616069 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:46 crc kubenswrapper[4766]: I1213 03:46:46.616116 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:46 crc kubenswrapper[4766]: I1213 03:46:46.616069 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:46 crc kubenswrapper[4766]: E1213 03:46:46.616224 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:46 crc kubenswrapper[4766]: E1213 03:46:46.616294 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:46 crc kubenswrapper[4766]: E1213 03:46:46.616396 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:46 crc kubenswrapper[4766]: I1213 03:46:46.764553 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qvxrm"] Dec 13 03:46:47 crc kubenswrapper[4766]: I1213 03:46:47.426206 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:47 crc kubenswrapper[4766]: E1213 03:46:47.426666 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:47 crc kubenswrapper[4766]: I1213 03:46:47.615768 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:47 crc kubenswrapper[4766]: E1213 03:46:47.616102 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:48 crc kubenswrapper[4766]: I1213 03:46:48.615840 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:48 crc kubenswrapper[4766]: I1213 03:46:48.615876 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:48 crc kubenswrapper[4766]: E1213 03:46:48.616081 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:48 crc kubenswrapper[4766]: E1213 03:46:48.616191 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:49 crc kubenswrapper[4766]: I1213 03:46:49.615805 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:49 crc kubenswrapper[4766]: I1213 03:46:49.617122 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:49 crc kubenswrapper[4766]: I1213 03:46:49.617344 4766 scope.go:117] "RemoveContainer" containerID="40ada8692e77f3fdcfb7af0831f398beba03791a2e7286360296b007c809a329" Dec 13 03:46:49 crc kubenswrapper[4766]: E1213 03:46:49.617364 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:49 crc kubenswrapper[4766]: E1213 03:46:49.617507 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:49 crc kubenswrapper[4766]: E1213 03:46:49.823036 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 13 03:46:50 crc kubenswrapper[4766]: I1213 03:46:50.437893 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/1.log" Dec 13 03:46:50 crc kubenswrapper[4766]: I1213 03:46:50.437958 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6n4vc" event={"ID":"b724d1e1-9ded-434e-b852-f5233f27ef32","Type":"ContainerStarted","Data":"934184c50f0ad116964e4b6847b2598415944b7ff00877ec16ecaeb4f28a00b1"} Dec 13 03:46:50 crc kubenswrapper[4766]: I1213 03:46:50.615571 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:50 crc kubenswrapper[4766]: I1213 03:46:50.615653 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:50 crc kubenswrapper[4766]: E1213 03:46:50.615873 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:50 crc kubenswrapper[4766]: E1213 03:46:50.616022 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:51 crc kubenswrapper[4766]: I1213 03:46:51.615995 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:51 crc kubenswrapper[4766]: I1213 03:46:51.616080 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:51 crc kubenswrapper[4766]: E1213 03:46:51.616185 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:51 crc kubenswrapper[4766]: E1213 03:46:51.616351 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:52 crc kubenswrapper[4766]: I1213 03:46:52.616135 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:52 crc kubenswrapper[4766]: I1213 03:46:52.616211 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:52 crc kubenswrapper[4766]: E1213 03:46:52.616337 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:52 crc kubenswrapper[4766]: E1213 03:46:52.616415 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:53 crc kubenswrapper[4766]: I1213 03:46:53.615749 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:53 crc kubenswrapper[4766]: E1213 03:46:53.615951 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 03:46:53 crc kubenswrapper[4766]: I1213 03:46:53.615958 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:53 crc kubenswrapper[4766]: E1213 03:46:53.616250 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvxrm" podUID="84c9636d-a525-40e8-bc35-af07ecbdeafc" Dec 13 03:46:54 crc kubenswrapper[4766]: I1213 03:46:54.615472 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:54 crc kubenswrapper[4766]: I1213 03:46:54.615515 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:54 crc kubenswrapper[4766]: E1213 03:46:54.615648 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 03:46:54 crc kubenswrapper[4766]: E1213 03:46:54.615759 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 03:46:55 crc kubenswrapper[4766]: I1213 03:46:55.615908 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:46:55 crc kubenswrapper[4766]: I1213 03:46:55.616203 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm" Dec 13 03:46:55 crc kubenswrapper[4766]: I1213 03:46:55.620211 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 13 03:46:55 crc kubenswrapper[4766]: I1213 03:46:55.620412 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 13 03:46:55 crc kubenswrapper[4766]: I1213 03:46:55.620812 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 13 03:46:55 crc kubenswrapper[4766]: I1213 03:46:55.620866 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.279738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.326524 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-754n6"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.327337 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.330526 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.330671 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.333060 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.333973 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.335097 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.335179 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.335193 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.338416 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2lcjl"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.339195 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.355199 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.356773 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.357614 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.357883 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.361209 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.367170 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kw69h"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.368127 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.370348 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.370834 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.371090 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l2gzj"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.371980 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.372102 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.372706 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.372728 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.372961 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.397344 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wrg86"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.398178 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.398878 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.400601 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.398907 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.399057 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.399320 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.401340 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.401468 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.401626 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.401759 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.405533 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.408705 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.409343 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.411381 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.411703 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.412231 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.412469 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.412571 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.413096 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.414472 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.414572 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.414622 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.414866 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.416488 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x546d"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.417085 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.421011 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.422279 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.422731 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.424047 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tjszx"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.423718 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.423943 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.426515 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-tjszx" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.428242 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.428981 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.429463 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.429633 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.430009 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.430216 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.430386 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.430635 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.430675 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.430736 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.430864 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.430954 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.430987 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.430902 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.431233 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.431374 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.431532 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.431717 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.431760 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.431851 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.431902 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.432000 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.432065 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.432111 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.432395 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.433650 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.433887 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.452714 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.457835 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzfkv"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.458483 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.458543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-audit\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.458696 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-audit-dir\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.458750 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.458787 
4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-policies\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.458848 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/434785a5-a04b-42f1-8f70-d12238df0eff-serving-cert\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.458880 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7t5r\" (UniqueName: \"kubernetes.io/projected/434785a5-a04b-42f1-8f70-d12238df0eff-kube-api-access-w7t5r\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.458938 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.458978 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t5cj\" (UniqueName: \"kubernetes.io/projected/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-kube-api-access-5t5cj\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459012 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459056 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459077 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/654b71d2-08e7-4c5a-b27c-a6fb253b845c-serving-cert\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459130 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459162 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-node-pullsecrets\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459191 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459244 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b288e53-1206-4832-8d01-94dd9d33f9dd-config\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459287 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434785a5-a04b-42f1-8f70-d12238df0eff-config\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459321 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459362 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459409 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-etcd-serving-ca\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459475 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/434785a5-a04b-42f1-8f70-d12238df0eff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459506 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4q6\" (UniqueName: \"kubernetes.io/projected/7b288e53-1206-4832-8d01-94dd9d33f9dd-kube-api-access-zs4q6\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459014 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459571 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.459537 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rljhx\" (UniqueName: \"kubernetes.io/projected/768b7312-75b7-4135-92ef-dbf8a089efb3-kube-api-access-rljhx\") pod \"cluster-samples-operator-665b6dd947-rqt6m\" (UID: \"768b7312-75b7-4135-92ef-dbf8a089efb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460112 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/434785a5-a04b-42f1-8f70-d12238df0eff-service-ca-bundle\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460254 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/654b71d2-08e7-4c5a-b27c-a6fb253b845c-trusted-ca\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460311 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b288e53-1206-4832-8d01-94dd9d33f9dd-images\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460350 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b288e53-1206-4832-8d01-94dd9d33f9dd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460414 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/768b7312-75b7-4135-92ef-dbf8a089efb3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rqt6m\" (UID: \"768b7312-75b7-4135-92ef-dbf8a089efb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460463 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-dir\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460499 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js9d4\" (UniqueName: \"kubernetes.io/projected/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-kube-api-access-js9d4\") pod \"oauth-openshift-558db77b4-l2gzj\" 
(UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460554 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-encryption-config\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460590 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654b71d2-08e7-4c5a-b27c-a6fb253b845c-config\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460678 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460704 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-serving-cert\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460748 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460774 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-config\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " 
pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460812 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-image-import-ca\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460836 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-etcd-client\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460858 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.460913 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnfk2\" (UniqueName: \"kubernetes.io/projected/654b71d2-08e7-4c5a-b27c-a6fb253b845c-kube-api-access-wnfk2\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.467947 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.477235 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.490509 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.493738 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.494292 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.495087 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.495381 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.495943 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.496126 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.496134 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.496445 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.496443 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.497281 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.497538 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.497957 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.498453 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.499349 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.499933 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.500072 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.500594 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.500603 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.500968 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.503136 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.504367 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.504800 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.505961 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.506010 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.506034 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.506503 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.506761 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.506935 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.507062 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.507210 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.508210 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.508408 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.508411 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.508507 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.508715 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.508833 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.510668 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d47ng"] 
Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.511382 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.511938 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.512382 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.512690 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.512743 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.512940 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.513577 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.514566 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.514976 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zchph"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.515770 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.516102 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.516744 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dh99j"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.516754 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.517325 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zxh2f"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.517734 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.518344 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.518690 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.518805 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.519383 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.519555 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.518755 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.519418 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.521191 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.521683 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.522500 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.523176 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.524086 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.524721 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.525204 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j864x"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.551577 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.556588 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.557108 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.559499 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.560164 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.560252 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.560824 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.561208 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.560829 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.567016 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnfk2\" (UniqueName: \"kubernetes.io/projected/654b71d2-08e7-4c5a-b27c-a6fb253b845c-kube-api-access-wnfk2\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.567142 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe3fb891-3c52-49fc-9b5c-22c7dbde195b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tzfzc\" (UID: \"fe3fb891-3c52-49fc-9b5c-22c7dbde195b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.567230 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13f10968-6869-4ee6-975a-29260f3914ba-audit-dir\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.567315 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13f10968-6869-4ee6-975a-29260f3914ba-etcd-client\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.567400 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce7d03d7-4dcc-4d25-909d-a0db72482053-auth-proxy-config\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.567553 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b486f774-cede-460b-bc98-a89766288e88-serving-cert\") pod \"openshift-config-operator-7777fb866f-4nlxj\" (UID: \"b486f774-cede-460b-bc98-a89766288e88\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.567668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-audit\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.567762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-audit-dir\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.569133 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-audit\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.569176 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-audit-dir\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.571569 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.574650 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7d03d7-4dcc-4d25-909d-a0db72482053-config\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.575601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.575696 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-policies\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.571628 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.572868 
4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.575789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-client-ca\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608681 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a71b41ac-3909-4658-8b06-703e8cdba663-profile-collector-cert\") pod \"catalog-operator-68c6474976-k7zxp\" (UID: \"a71b41ac-3909-4658-8b06-703e8cdba663\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608740 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-config\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608763 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c019342-bb58-4587-b4fb-24f0641905b1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608785 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-webhook-cert\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608810 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/434785a5-a04b-42f1-8f70-d12238df0eff-serving-cert\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608829 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7t5r\" (UniqueName: \"kubernetes.io/projected/434785a5-a04b-42f1-8f70-d12238df0eff-kube-api-access-w7t5r\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59e373cc-61e8-4c1a-9734-6a2120179e36-serving-cert\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: 
\"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608866 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ca84cfba-243f-4342-9550-db6256f0c2f8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-drbvl\" (UID: \"ca84cfba-243f-4342-9550-db6256f0c2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608886 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbjl5\" (UniqueName: \"kubernetes.io/projected/41447db8-0fe6-4772-8bbb-12a68ba33f1e-kube-api-access-xbjl5\") pod \"downloads-7954f5f757-tjszx\" (UID: \"41447db8-0fe6-4772-8bbb-12a68ba33f1e\") " pod="openshift-console/downloads-7954f5f757-tjszx" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608903 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-apiservice-cert\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608930 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608950 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t5cj\" (UniqueName: \"kubernetes.io/projected/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-kube-api-access-5t5cj\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608968 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b2qfj\" (UID: \"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.608990 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hj5\" (UniqueName: \"kubernetes.io/projected/59e373cc-61e8-4c1a-9734-6a2120179e36-kube-api-access-w6hj5\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609007 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ca84cfba-243f-4342-9550-db6256f0c2f8-srv-cert\") pod \"olm-operator-6b444d44fb-drbvl\" (UID: \"ca84cfba-243f-4342-9550-db6256f0c2f8\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609033 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609049 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-client-ca\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609068 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b486f774-cede-460b-bc98-a89766288e88-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4nlxj\" (UID: \"b486f774-cede-460b-bc98-a89766288e88\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609089 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9sfm\" (UniqueName: \"kubernetes.io/projected/b486f774-cede-460b-bc98-a89766288e88-kube-api-access-w9sfm\") pod \"openshift-config-operator-7777fb866f-4nlxj\" (UID: \"b486f774-cede-460b-bc98-a89766288e88\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609110 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/654b71d2-08e7-4c5a-b27c-a6fb253b845c-serving-cert\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609127 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609144 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-node-pullsecrets\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609160 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdnf4\" (UniqueName: \"kubernetes.io/projected/ca84cfba-243f-4342-9550-db6256f0c2f8-kube-api-access-cdnf4\") pod \"olm-operator-6b444d44fb-drbvl\" (UID: \"ca84cfba-243f-4342-9550-db6256f0c2f8\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609179 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcb2f\" (UniqueName: \"kubernetes.io/projected/867694b5-7481-462e-b143-379b53b0ad6e-kube-api-access-tcb2f\") pod \"migrator-59844c95c7-kxzv9\" (UID: \"867694b5-7481-462e-b143-379b53b0ad6e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609200 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609219 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c019342-bb58-4587-b4fb-24f0641905b1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609240 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf994913-919d-4268-b606-5b0286e721d0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-62jdj\" (UID: \"cf994913-919d-4268-b606-5b0286e721d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609256 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a71b41ac-3909-4658-8b06-703e8cdba663-srv-cert\") pod \"catalog-operator-68c6474976-k7zxp\" (UID: \"a71b41ac-3909-4658-8b06-703e8cdba663\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609346 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b288e53-1206-4832-8d01-94dd9d33f9dd-config\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609384 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6sxv\" (UniqueName: \"kubernetes.io/projected/fc132ae0-d7bd-4064-89ad-4f9a57e76369-kube-api-access-g6sxv\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609420 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13f10968-6869-4ee6-975a-29260f3914ba-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609490 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a26b74bc-41ac-4d65-b833-cb269d199ddd-serving-cert\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609509 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434785a5-a04b-42f1-8f70-d12238df0eff-config\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609526 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609542 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13f10968-6869-4ee6-975a-29260f3914ba-audit-policies\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2-config\") pod \"kube-apiserver-operator-766d6c64bb-b2qfj\" (UID: \"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609581 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pkzd\" (UniqueName: \"kubernetes.io/projected/ce7d03d7-4dcc-4d25-909d-a0db72482053-kube-api-access-6pkzd\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609603 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609619 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-etcd-serving-ca\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 
03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609634 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ggpw\" (UniqueName: \"kubernetes.io/projected/fe3fb891-3c52-49fc-9b5c-22c7dbde195b-kube-api-access-2ggpw\") pod \"openshift-apiserver-operator-796bbdcf4f-tzfzc\" (UID: \"fe3fb891-3c52-49fc-9b5c-22c7dbde195b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609654 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/434785a5-a04b-42f1-8f70-d12238df0eff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609675 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs4q6\" (UniqueName: \"kubernetes.io/projected/7b288e53-1206-4832-8d01-94dd9d33f9dd-kube-api-access-zs4q6\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609709 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609740 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc132ae0-d7bd-4064-89ad-4f9a57e76369-console-serving-cert\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609762 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fc132ae0-d7bd-4064-89ad-4f9a57e76369-console-oauth-config\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609784 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13f10968-6869-4ee6-975a-29260f3914ba-encryption-config\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609800 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfbpr\" (UniqueName: \"kubernetes.io/projected/13f10968-6869-4ee6-975a-29260f3914ba-kube-api-access-wfbpr\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609821 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rljhx\" (UniqueName: \"kubernetes.io/projected/768b7312-75b7-4135-92ef-dbf8a089efb3-kube-api-access-rljhx\") pod \"cluster-samples-operator-665b6dd947-rqt6m\" (UID: \"768b7312-75b7-4135-92ef-dbf8a089efb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609849 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkhgc\" (UniqueName: \"kubernetes.io/projected/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-kube-api-access-zkhgc\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609887 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf994913-919d-4268-b606-5b0286e721d0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-62jdj\" (UID: \"cf994913-919d-4268-b606-5b0286e721d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/434785a5-a04b-42f1-8f70-d12238df0eff-service-ca-bundle\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609939 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/654b71d2-08e7-4c5a-b27c-a6fb253b845c-trusted-ca\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609956 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe3fb891-3c52-49fc-9b5c-22c7dbde195b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tzfzc\" (UID: \"fe3fb891-3c52-49fc-9b5c-22c7dbde195b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609974 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-oauth-serving-cert\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.609995 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b288e53-1206-4832-8d01-94dd9d33f9dd-images\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610011 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b288e53-1206-4832-8d01-94dd9d33f9dd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/768b7312-75b7-4135-92ef-dbf8a089efb3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rqt6m\" (UID: \"768b7312-75b7-4135-92ef-dbf8a089efb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610055 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-dir\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610073 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js9d4\" (UniqueName: \"kubernetes.io/projected/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-kube-api-access-js9d4\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610129 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-encryption-config\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610153 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610177 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ce7d03d7-4dcc-4d25-909d-a0db72482053-machine-approver-tls\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610220 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654b71d2-08e7-4c5a-b27c-a6fb253b845c-config\") pod 
\"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610238 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610256 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-serving-cert\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610271 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-service-ca\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610290 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13f10968-6869-4ee6-975a-29260f3914ba-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610314 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610337 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-config\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-image-import-ca\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610387 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-console-config\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610402 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-config\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610420 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwn8s\" (UniqueName: \"kubernetes.io/projected/6c019342-bb58-4587-b4fb-24f0641905b1-kube-api-access-bwn8s\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610455 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-tmpfs\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610473 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7v9g\" (UniqueName: \"kubernetes.io/projected/a71b41ac-3909-4658-8b06-703e8cdba663-kube-api-access-h7v9g\") pod \"catalog-operator-68c6474976-k7zxp\" (UID: \"a71b41ac-3909-4658-8b06-703e8cdba663\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610490 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms5qw\" (UniqueName: \"kubernetes.io/projected/a26b74bc-41ac-4d65-b833-cb269d199ddd-kube-api-access-ms5qw\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610506 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c019342-bb58-4587-b4fb-24f0641905b1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610522 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-etcd-client\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610537 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b2qfj\" (UID: \"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610559 4766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-trusted-ca-bundle\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610581 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13f10968-6869-4ee6-975a-29260f3914ba-serving-cert\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610603 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzlbc\" (UniqueName: \"kubernetes.io/projected/cf994913-919d-4268-b606-5b0286e721d0-kube-api-access-pzlbc\") pod \"openshift-controller-manager-operator-756b6f6bc6-62jdj\" (UID: \"cf994913-919d-4268-b606-5b0286e721d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.610626 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.612726 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.614330 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-policies\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.617291 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.619020 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434785a5-a04b-42f1-8f70-d12238df0eff-config\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.620497 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.621303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/654b71d2-08e7-4c5a-b27c-a6fb253b845c-config\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.637533 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.622411 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.623280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/654b71d2-08e7-4c5a-b27c-a6fb253b845c-serving-cert\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.623490 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/434785a5-a04b-42f1-8f70-d12238df0eff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.624173 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.624301 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/434785a5-a04b-42f1-8f70-d12238df0eff-service-ca-bundle\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.625644 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.626189 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-config\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " 
pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.626453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-node-pullsecrets\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.626783 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-serving-cert\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.627092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.627172 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b288e53-1206-4832-8d01-94dd9d33f9dd-images\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.627195 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5fm8f"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.627361 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/654b71d2-08e7-4c5a-b27c-a6fb253b845c-trusted-ca\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.627406 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-etcd-serving-ca\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.627918 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-dir\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.629074 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.629403 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.630634 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-etcd-client\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.631656 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b288e53-1206-4832-8d01-94dd9d33f9dd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.634479 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/434785a5-a04b-42f1-8f70-d12238df0eff-serving-cert\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.636476 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-encryption-config\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.637172 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.637217 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-image-import-ca\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.637221 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.637243 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/768b7312-75b7-4135-92ef-dbf8a089efb3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rqt6m\" (UID: \"768b7312-75b7-4135-92ef-dbf8a089efb3\") 
" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.637398 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b288e53-1206-4832-8d01-94dd9d33f9dd-config\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.622338 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.633548 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.638893 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6x9j4"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.633667 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.639023 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.639057 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.638992 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.641577 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mrdmc"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.642067 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4b7sz"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.642269 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.642913 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.643016 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.643809 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-f9jwz"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.643915 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.644138 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.644404 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.644531 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2lcjl"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.644598 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kw69h"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.644700 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f9jwz" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.645055 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.650582 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.651015 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-754n6"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.653193 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wrg86"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.654562 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.656234 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tjszx"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.657895 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.659990 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.663061 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x546d"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.665638 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.665864 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.668835 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.669812 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zchph"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.670957 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzfkv"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.673182 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.674195 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d47ng"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.675348 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l2gzj"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.677136 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bjcvg"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.678278 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.678326 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j5v8m"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.679877 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.680027 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.680647 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.682021 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.683389 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dh99j"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.684579 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.686116 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.687123 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.688187 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.689273 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.690981 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bjcvg"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.691819 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.690259 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.692644 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.694000 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mrdmc"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.698775 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.701020 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.703844 4766 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4b7sz"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.705707 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5fm8f"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.709419 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j864x"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.710352 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711472 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13f10968-6869-4ee6-975a-29260f3914ba-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711515 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4831deef-ffff-4e1d-9b1d-57903b35ae2b-signing-cabundle\") pod \"service-ca-9c57cc56f-dh99j\" (UID: \"4831deef-ffff-4e1d-9b1d-57903b35ae2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711546 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c74ca261-ba68-4ce7-bf90-200deb0b2b11-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwjph\" (UID: \"c74ca261-ba68-4ce7-bf90-200deb0b2b11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbfhl\" (UniqueName: \"kubernetes.io/projected/c74ca261-ba68-4ce7-bf90-200deb0b2b11-kube-api-access-dbfhl\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwjph\" (UID: \"c74ca261-ba68-4ce7-bf90-200deb0b2b11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711590 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e283eb17-df64-4565-8f82-a8afac963d4c-secret-volume\") pod \"collect-profiles-29426625-n2qvr\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711614 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ggpw\" (UniqueName: \"kubernetes.io/projected/fe3fb891-3c52-49fc-9b5c-22c7dbde195b-kube-api-access-2ggpw\") pod \"openshift-apiserver-operator-796bbdcf4f-tzfzc\" (UID: \"fe3fb891-3c52-49fc-9b5c-22c7dbde195b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711640 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711661 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc132ae0-d7bd-4064-89ad-4f9a57e76369-console-serving-cert\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711680 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13f10968-6869-4ee6-975a-29260f3914ba-encryption-config\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711700 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8763ae63-aa9e-4f5f-8e81-015aaa58e3a2-config\") pod \"service-ca-operator-777779d784-zchph\" (UID: \"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711720 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d47ng\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") " pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711737 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4b485d4-889f-4a16-8d5d-ebef14fdeb98-proxy-tls\") pod \"machine-config-controller-84d6567774-j864x\" (UID: \"e4b485d4-889f-4a16-8d5d-ebef14fdeb98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711754 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vpns8\" (UID: \"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711781 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkhgc\" (UniqueName: \"kubernetes.io/projected/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-kube-api-access-zkhgc\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711811 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-oauth-serving-cert\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711829 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z88h4\" (UniqueName: \"kubernetes.io/projected/3c6809b6-c67a-45cf-b251-f20b62790313-kube-api-access-z88h4\") pod \"marketplace-operator-79b997595-d47ng\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") " pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711854 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxthz\" (UniqueName: \"kubernetes.io/projected/e283eb17-df64-4565-8f82-a8afac963d4c-kube-api-access-cxthz\") pod \"collect-profiles-29426625-n2qvr\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxzqx\" (UniqueName: \"kubernetes.io/projected/e4b485d4-889f-4a16-8d5d-ebef14fdeb98-kube-api-access-jxzqx\") pod \"machine-config-controller-84d6567774-j864x\" (UID: \"e4b485d4-889f-4a16-8d5d-ebef14fdeb98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ce7d03d7-4dcc-4d25-909d-a0db72482053-machine-approver-tls\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a08f722d-f913-4c66-8b1f-5ad1285884cb-service-ca-bundle\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711932 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f91707aa-5388-47ac-8bed-37b6e6450118-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711955 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-service-ca\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711974 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13f10968-6869-4ee6-975a-29260f3914ba-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.711991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4831deef-ffff-4e1d-9b1d-57903b35ae2b-signing-key\") pod \"service-ca-9c57cc56f-dh99j\" (UID: \"4831deef-ffff-4e1d-9b1d-57903b35ae2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwn8s\" (UniqueName: \"kubernetes.io/projected/6c019342-bb58-4587-b4fb-24f0641905b1-kube-api-access-bwn8s\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-tmpfs\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712047 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms5qw\" (UniqueName: \"kubernetes.io/projected/a26b74bc-41ac-4d65-b833-cb269d199ddd-kube-api-access-ms5qw\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712067 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa74d23-cdf2-4758-82ff-aaf0017693c3-config\") pod \"kube-controller-manager-operator-78b949d7b-sdjlm\" (UID: \"0aa74d23-cdf2-4758-82ff-aaf0017693c3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712085 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f91707aa-5388-47ac-8bed-37b6e6450118-metrics-tls\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712105 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13f10968-6869-4ee6-975a-29260f3914ba-audit-dir\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712124 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13f10968-6869-4ee6-975a-29260f3914ba-etcd-client\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc 
kubenswrapper[4766]: I1213 03:46:56.712142 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4b485d4-889f-4a16-8d5d-ebef14fdeb98-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j864x\" (UID: \"e4b485d4-889f-4a16-8d5d-ebef14fdeb98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b486f774-cede-460b-bc98-a89766288e88-serving-cert\") pod \"openshift-config-operator-7777fb866f-4nlxj\" (UID: \"b486f774-cede-460b-bc98-a89766288e88\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712182 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tgtp\" (UniqueName: \"kubernetes.io/projected/1ac742ec-fd68-4c89-890f-b6117573eb88-kube-api-access-5tgtp\") pod \"package-server-manager-789f6589d5-rw6wb\" (UID: \"1ac742ec-fd68-4c89-890f-b6117573eb88\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712205 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vpns8\" (UID: \"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712225 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a71b41ac-3909-4658-8b06-703e8cdba663-profile-collector-cert\") pod \"catalog-operator-68c6474976-k7zxp\" (UID: \"a71b41ac-3909-4658-8b06-703e8cdba663\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-config\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.712824 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13f10968-6869-4ee6-975a-29260f3914ba-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.713389 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-service-ca\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714088 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13f10968-6869-4ee6-975a-29260f3914ba-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714266 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-webhook-cert\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714311 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26gzk\" (UniqueName: \"kubernetes.io/projected/a08f722d-f913-4c66-8b1f-5ad1285884cb-kube-api-access-26gzk\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vpns8\" (UID: \"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714384 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b2qfj\" (UID: \"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714406 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6hj5\" (UniqueName: \"kubernetes.io/projected/59e373cc-61e8-4c1a-9734-6a2120179e36-kube-api-access-w6hj5\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ca84cfba-243f-4342-9550-db6256f0c2f8-srv-cert\") pod \"olm-operator-6b444d44fb-drbvl\" (UID: \"ca84cfba-243f-4342-9550-db6256f0c2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714523 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f91707aa-5388-47ac-8bed-37b6e6450118-trusted-ca\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714546 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9sfm\" (UniqueName: 
\"kubernetes.io/projected/b486f774-cede-460b-bc98-a89766288e88-kube-api-access-w9sfm\") pod \"openshift-config-operator-7777fb866f-4nlxj\" (UID: \"b486f774-cede-460b-bc98-a89766288e88\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714564 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0aa74d23-cdf2-4758-82ff-aaf0017693c3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sdjlm\" (UID: \"0aa74d23-cdf2-4758-82ff-aaf0017693c3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714586 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcb2f\" (UniqueName: \"kubernetes.io/projected/867694b5-7481-462e-b143-379b53b0ad6e-kube-api-access-tcb2f\") pod \"migrator-59844c95c7-kxzv9\" (UID: \"867694b5-7481-462e-b143-379b53b0ad6e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714607 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c019342-bb58-4587-b4fb-24f0641905b1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714628 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf994913-919d-4268-b606-5b0286e721d0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-62jdj\" (UID: \"cf994913-919d-4268-b606-5b0286e721d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.714286 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13f10968-6869-4ee6-975a-29260f3914ba-audit-dir\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.715938 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.716073 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-oauth-serving-cert\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.716821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b486f774-cede-460b-bc98-a89766288e88-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-4nlxj\" (UID: \"b486f774-cede-460b-bc98-a89766288e88\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.716925 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf994913-919d-4268-b606-5b0286e721d0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-62jdj\" (UID: \"cf994913-919d-4268-b606-5b0286e721d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.717000 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-config\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.717108 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a71b41ac-3909-4658-8b06-703e8cdba663-profile-collector-cert\") pod \"catalog-operator-68c6474976-k7zxp\" (UID: \"a71b41ac-3909-4658-8b06-703e8cdba663\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.717135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13f10968-6869-4ee6-975a-29260f3914ba-etcd-client\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.717300 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6sxv\" (UniqueName: \"kubernetes.io/projected/fc132ae0-d7bd-4064-89ad-4f9a57e76369-kube-api-access-g6sxv\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.717342 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrvtc\" (UniqueName: \"kubernetes.io/projected/c3ad1798-0983-4d00-ae6d-aef7143647f3-kube-api-access-xrvtc\") pod \"control-plane-machine-set-operator-78cbb6b69f-l9g79\" (UID: \"c3ad1798-0983-4d00-ae6d-aef7143647f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.717661 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a26b74bc-41ac-4d65-b833-cb269d199ddd-serving-cert\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.717702 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13f10968-6869-4ee6-975a-29260f3914ba-encryption-config\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 
crc kubenswrapper[4766]: I1213 03:46:56.717835 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13f10968-6869-4ee6-975a-29260f3914ba-audit-policies\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.717921 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2-config\") pod \"kube-apiserver-operator-766d6c64bb-b2qfj\" (UID: \"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718049 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pkzd\" (UniqueName: \"kubernetes.io/projected/ce7d03d7-4dcc-4d25-909d-a0db72482053-kube-api-access-6pkzd\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718067 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-tmpfs\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718094 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ac742ec-fd68-4c89-890f-b6117573eb88-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rw6wb\" (UID: \"1ac742ec-fd68-4c89-890f-b6117573eb88\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718359 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ad1798-0983-4d00-ae6d-aef7143647f3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l9g79\" (UID: \"c3ad1798-0983-4d00-ae6d-aef7143647f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718539 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a08f722d-f913-4c66-8b1f-5ad1285884cb-stats-auth\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718583 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mfbw\" (UniqueName: \"kubernetes.io/projected/f91707aa-5388-47ac-8bed-37b6e6450118-kube-api-access-2mfbw\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 
03:46:56.718620 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fc132ae0-d7bd-4064-89ad-4f9a57e76369-console-oauth-config\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718646 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfbpr\" (UniqueName: \"kubernetes.io/projected/13f10968-6869-4ee6-975a-29260f3914ba-kube-api-access-wfbpr\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718666 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a08f722d-f913-4c66-8b1f-5ad1285884cb-metrics-certs\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718686 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13f10968-6869-4ee6-975a-29260f3914ba-audit-policies\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718746 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf994913-919d-4268-b606-5b0286e721d0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-62jdj\" (UID: \"cf994913-919d-4268-b606-5b0286e721d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718832 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe3fb891-3c52-49fc-9b5c-22c7dbde195b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tzfzc\" (UID: \"fe3fb891-3c52-49fc-9b5c-22c7dbde195b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.718888 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2-config\") pod \"kube-apiserver-operator-766d6c64bb-b2qfj\" (UID: \"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-console-config\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719336 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-config\") pod \"route-controller-manager-6576b87f9c-kxg69\" 
(UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719372 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7v9g\" (UniqueName: \"kubernetes.io/projected/a71b41ac-3909-4658-8b06-703e8cdba663-kube-api-access-h7v9g\") pod \"catalog-operator-68c6474976-k7zxp\" (UID: \"a71b41ac-3909-4658-8b06-703e8cdba663\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719504 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e283eb17-df64-4565-8f82-a8afac963d4c-config-volume\") pod \"collect-profiles-29426625-n2qvr\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719534 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b2qfj\" (UID: \"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719558 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c019342-bb58-4587-b4fb-24f0641905b1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719559 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe3fb891-3c52-49fc-9b5c-22c7dbde195b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tzfzc\" (UID: \"fe3fb891-3c52-49fc-9b5c-22c7dbde195b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719681 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-trusted-ca-bundle\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13f10968-6869-4ee6-975a-29260f3914ba-serving-cert\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719727 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzlbc\" (UniqueName: \"kubernetes.io/projected/cf994913-919d-4268-b606-5b0286e721d0-kube-api-access-pzlbc\") pod \"openshift-controller-manager-operator-756b6f6bc6-62jdj\" (UID: \"cf994913-919d-4268-b606-5b0286e721d0\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719821 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnv85\" (UniqueName: \"kubernetes.io/projected/4831deef-ffff-4e1d-9b1d-57903b35ae2b-kube-api-access-vnv85\") pod \"service-ca-9c57cc56f-dh99j\" (UID: \"4831deef-ffff-4e1d-9b1d-57903b35ae2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.719852 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe3fb891-3c52-49fc-9b5c-22c7dbde195b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tzfzc\" (UID: \"fe3fb891-3c52-49fc-9b5c-22c7dbde195b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.720384 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c019342-bb58-4587-b4fb-24f0641905b1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.720600 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc132ae0-d7bd-4064-89ad-4f9a57e76369-console-serving-cert\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.720812 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-console-config\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.721076 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-config\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.721149 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce7d03d7-4dcc-4d25-909d-a0db72482053-auth-proxy-config\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.721823 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c019342-bb58-4587-b4fb-24f0641905b1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.721919 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7d03d7-4dcc-4d25-909d-a0db72482053-config\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.721991 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-client-ca\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722028 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8763ae63-aa9e-4f5f-8e81-015aaa58e3a2-serving-cert\") pod \"service-ca-operator-777779d784-zchph\" (UID: \"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722048 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d47ng\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") " pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce7d03d7-4dcc-4d25-909d-a0db72482053-auth-proxy-config\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722239 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf994913-919d-4268-b606-5b0286e721d0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-62jdj\" (UID: \"cf994913-919d-4268-b606-5b0286e721d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722343 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c019342-bb58-4587-b4fb-24f0641905b1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722391 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbjl5\" (UniqueName: \"kubernetes.io/projected/41447db8-0fe6-4772-8bbb-12a68ba33f1e-kube-api-access-xbjl5\") pod \"downloads-7954f5f757-tjszx\" (UID: \"41447db8-0fe6-4772-8bbb-12a68ba33f1e\") " pod="openshift-console/downloads-7954f5f757-tjszx" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722481 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-apiservice-cert\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722516 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59e373cc-61e8-4c1a-9734-6a2120179e36-serving-cert\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722543 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ca84cfba-243f-4342-9550-db6256f0c2f8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-drbvl\" (UID: \"ca84cfba-243f-4342-9550-db6256f0c2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722622 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aa74d23-cdf2-4758-82ff-aaf0017693c3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sdjlm\" (UID: \"0aa74d23-cdf2-4758-82ff-aaf0017693c3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722656 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c74ca261-ba68-4ce7-bf90-200deb0b2b11-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwjph\" (UID: \"c74ca261-ba68-4ce7-bf90-200deb0b2b11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722678 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7d03d7-4dcc-4d25-909d-a0db72482053-config\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722717 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-client-ca\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722874 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-client-ca\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.722900 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b486f774-cede-460b-bc98-a89766288e88-available-featuregates\") 
pod \"openshift-config-operator-7777fb866f-4nlxj\" (UID: \"b486f774-cede-460b-bc98-a89766288e88\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.723377 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc132ae0-d7bd-4064-89ad-4f9a57e76369-trusted-ca-bundle\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.723507 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ce7d03d7-4dcc-4d25-909d-a0db72482053-machine-approver-tls\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.723739 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13f10968-6869-4ee6-975a-29260f3914ba-serving-cert\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.723745 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b2qfj\" (UID: \"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.723816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdnf4\" (UniqueName: \"kubernetes.io/projected/ca84cfba-243f-4342-9550-db6256f0c2f8-kube-api-access-cdnf4\") pod \"olm-operator-6b444d44fb-drbvl\" (UID: \"ca84cfba-243f-4342-9550-db6256f0c2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.723833 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b486f774-cede-460b-bc98-a89766288e88-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4nlxj\" (UID: \"b486f774-cede-460b-bc98-a89766288e88\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.723874 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr7v9\" (UniqueName: \"kubernetes.io/projected/8763ae63-aa9e-4f5f-8e81-015aaa58e3a2-kube-api-access-mr7v9\") pod \"service-ca-operator-777779d784-zchph\" (UID: \"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.723899 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6x9j4"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.723978 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a71b41ac-3909-4658-8b06-703e8cdba663-srv-cert\") pod 
\"catalog-operator-68c6474976-k7zxp\" (UID: \"a71b41ac-3909-4658-8b06-703e8cdba663\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.724032 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a08f722d-f913-4c66-8b1f-5ad1285884cb-default-certificate\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.724880 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-client-ca\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.726097 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j5v8m"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.727716 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a71b41ac-3909-4658-8b06-703e8cdba663-srv-cert\") pod \"catalog-operator-68c6474976-k7zxp\" (UID: \"a71b41ac-3909-4658-8b06-703e8cdba663\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.728255 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59e373cc-61e8-4c1a-9734-6a2120179e36-serving-cert\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.728297 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ca84cfba-243f-4342-9550-db6256f0c2f8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-drbvl\" (UID: \"ca84cfba-243f-4342-9550-db6256f0c2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.728619 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8ckg8"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.728808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a26b74bc-41ac-4d65-b833-cb269d199ddd-serving-cert\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.729626 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8ckg8" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.730558 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.730690 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe3fb891-3c52-49fc-9b5c-22c7dbde195b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tzfzc\" (UID: \"fe3fb891-3c52-49fc-9b5c-22c7dbde195b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.730745 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.731551 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fc132ae0-d7bd-4064-89ad-4f9a57e76369-console-oauth-config\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.731922 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8ckg8"] Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.751597 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.757911 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-apiservice-cert\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.759839 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-webhook-cert\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.770865 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.781745 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ca84cfba-243f-4342-9550-db6256f0c2f8-srv-cert\") pod \"olm-operator-6b444d44fb-drbvl\" (UID: \"ca84cfba-243f-4342-9550-db6256f0c2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.789826 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.808928 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.824950 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z88h4\" (UniqueName: \"kubernetes.io/projected/3c6809b6-c67a-45cf-b251-f20b62790313-kube-api-access-z88h4\") pod \"marketplace-operator-79b997595-d47ng\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") " pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.825076 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxthz\" (UniqueName: \"kubernetes.io/projected/e283eb17-df64-4565-8f82-a8afac963d4c-kube-api-access-cxthz\") pod \"collect-profiles-29426625-n2qvr\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.825176 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxzqx\" (UniqueName: \"kubernetes.io/projected/e4b485d4-889f-4a16-8d5d-ebef14fdeb98-kube-api-access-jxzqx\") pod \"machine-config-controller-84d6567774-j864x\" (UID: \"e4b485d4-889f-4a16-8d5d-ebef14fdeb98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.825288 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4831deef-ffff-4e1d-9b1d-57903b35ae2b-signing-key\") pod \"service-ca-9c57cc56f-dh99j\" (UID: \"4831deef-ffff-4e1d-9b1d-57903b35ae2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.825366 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a08f722d-f913-4c66-8b1f-5ad1285884cb-service-ca-bundle\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.825485 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f91707aa-5388-47ac-8bed-37b6e6450118-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.825569 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-serving-cert\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.825672 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa74d23-cdf2-4758-82ff-aaf0017693c3-config\") pod \"kube-controller-manager-operator-78b949d7b-sdjlm\" (UID: \"0aa74d23-cdf2-4758-82ff-aaf0017693c3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.825772 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/59780c25-821e-416b-9437-6b78b8487221-metrics-tls\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.825867 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f91707aa-5388-47ac-8bed-37b6e6450118-metrics-tls\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.825948 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4b485d4-889f-4a16-8d5d-ebef14fdeb98-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j864x\" (UID: \"e4b485d4-889f-4a16-8d5d-ebef14fdeb98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.826061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vpns8\" (UID: \"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.826169 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-csi-data-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.826263 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nh5p\" (UniqueName: \"kubernetes.io/projected/70b87a0c-be0a-447b-b901-2b327774f436-kube-api-access-7nh5p\") pod \"multus-admission-controller-857f4d67dd-4b7sz\" (UID: \"70b87a0c-be0a-447b-b901-2b327774f436\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.826371 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-service-ca\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.826486 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tgtp\" (UniqueName: \"kubernetes.io/projected/1ac742ec-fd68-4c89-890f-b6117573eb88-kube-api-access-5tgtp\") pod \"package-server-manager-789f6589d5-rw6wb\" (UID: \"1ac742ec-fd68-4c89-890f-b6117573eb88\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.826597 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26gzk\" (UniqueName: \"kubernetes.io/projected/a08f722d-f913-4c66-8b1f-5ad1285884cb-kube-api-access-26gzk\") pod 
\"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.826685 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vpns8\" (UID: \"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.826838 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f91707aa-5388-47ac-8bed-37b6e6450118-trusted-ca\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.826926 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0aa74d23-cdf2-4758-82ff-aaf0017693c3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sdjlm\" (UID: \"0aa74d23-cdf2-4758-82ff-aaf0017693c3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827069 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrvtc\" (UniqueName: \"kubernetes.io/projected/c3ad1798-0983-4d00-ae6d-aef7143647f3-kube-api-access-xrvtc\") pod \"control-plane-machine-set-operator-78cbb6b69f-l9g79\" (UID: \"c3ad1798-0983-4d00-ae6d-aef7143647f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827175 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/084d14a7-1f30-49b0-9e05-3db228f5087c-images\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4b485d4-889f-4a16-8d5d-ebef14fdeb98-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j864x\" (UID: \"e4b485d4-889f-4a16-8d5d-ebef14fdeb98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827279 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ac742ec-fd68-4c89-890f-b6117573eb88-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rw6wb\" (UID: \"1ac742ec-fd68-4c89-890f-b6117573eb88\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827416 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c3ad1798-0983-4d00-ae6d-aef7143647f3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l9g79\" (UID: \"c3ad1798-0983-4d00-ae6d-aef7143647f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827504 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mfbw\" (UniqueName: \"kubernetes.io/projected/f91707aa-5388-47ac-8bed-37b6e6450118-kube-api-access-2mfbw\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827551 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a08f722d-f913-4c66-8b1f-5ad1285884cb-stats-auth\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827587 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/084d14a7-1f30-49b0-9e05-3db228f5087c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827617 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-client\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827653 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a08f722d-f913-4c66-8b1f-5ad1285884cb-metrics-certs\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827763 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59780c25-821e-416b-9437-6b78b8487221-config-volume\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827837 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e283eb17-df64-4565-8f82-a8afac963d4c-config-volume\") pod \"collect-profiles-29426625-n2qvr\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-mountpoint-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") 
" pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.827939 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-plugins-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828044 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-697gs\" (UniqueName: \"kubernetes.io/projected/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-kube-api-access-697gs\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828137 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnv85\" (UniqueName: \"kubernetes.io/projected/4831deef-ffff-4e1d-9b1d-57903b35ae2b-kube-api-access-vnv85\") pod \"service-ca-9c57cc56f-dh99j\" (UID: \"4831deef-ffff-4e1d-9b1d-57903b35ae2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828203 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pktsq\" (UniqueName: \"kubernetes.io/projected/084d14a7-1f30-49b0-9e05-3db228f5087c-kube-api-access-pktsq\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828299 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d47ng\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") " pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828360 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-registration-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828386 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8763ae63-aa9e-4f5f-8e81-015aaa58e3a2-serving-cert\") pod \"service-ca-operator-777779d784-zchph\" (UID: \"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828417 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78d9k\" (UniqueName: \"kubernetes.io/projected/deca563a-35ff-43bc-8251-f3b7c80581b2-kube-api-access-78d9k\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828482 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aa74d23-cdf2-4758-82ff-aaf0017693c3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sdjlm\" (UID: \"0aa74d23-cdf2-4758-82ff-aaf0017693c3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828508 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70b87a0c-be0a-447b-b901-2b327774f436-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4b7sz\" (UID: \"70b87a0c-be0a-447b-b901-2b327774f436\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828535 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c74ca261-ba68-4ce7-bf90-200deb0b2b11-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwjph\" (UID: \"c74ca261-ba68-4ce7-bf90-200deb0b2b11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828559 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-config\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828594 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr7v9\" (UniqueName: \"kubernetes.io/projected/8763ae63-aa9e-4f5f-8e81-015aaa58e3a2-kube-api-access-mr7v9\") pod \"service-ca-operator-777779d784-zchph\" (UID: \"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828613 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-ca\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828699 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/084d14a7-1f30-49b0-9e05-3db228f5087c-proxy-tls\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828860 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a08f722d-f913-4c66-8b1f-5ad1285884cb-default-certificate\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828941 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/c74ca261-ba68-4ce7-bf90-200deb0b2b11-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwjph\" (UID: \"c74ca261-ba68-4ce7-bf90-200deb0b2b11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.828971 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbfhl\" (UniqueName: \"kubernetes.io/projected/c74ca261-ba68-4ce7-bf90-200deb0b2b11-kube-api-access-dbfhl\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwjph\" (UID: \"c74ca261-ba68-4ce7-bf90-200deb0b2b11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.829001 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e283eb17-df64-4565-8f82-a8afac963d4c-secret-volume\") pod \"collect-profiles-29426625-n2qvr\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.829030 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-socket-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.829060 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4831deef-ffff-4e1d-9b1d-57903b35ae2b-signing-cabundle\") pod \"service-ca-9c57cc56f-dh99j\" (UID: \"4831deef-ffff-4e1d-9b1d-57903b35ae2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.829085 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c52p7\" (UniqueName: \"kubernetes.io/projected/59780c25-821e-416b-9437-6b78b8487221-kube-api-access-c52p7\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.829135 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4b485d4-889f-4a16-8d5d-ebef14fdeb98-proxy-tls\") pod \"machine-config-controller-84d6567774-j864x\" (UID: \"e4b485d4-889f-4a16-8d5d-ebef14fdeb98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.829160 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vpns8\" (UID: \"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.829187 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8763ae63-aa9e-4f5f-8e81-015aaa58e3a2-config\") 
pod \"service-ca-operator-777779d784-zchph\" (UID: \"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.829212 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d47ng\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") " pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.829880 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e283eb17-df64-4565-8f82-a8afac963d4c-config-volume\") pod \"collect-profiles-29426625-n2qvr\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.833988 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.836450 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e283eb17-df64-4565-8f82-a8afac963d4c-secret-volume\") pod \"collect-profiles-29426625-n2qvr\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.842897 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d47ng\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") " pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.850333 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.869787 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.883045 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ac742ec-fd68-4c89-890f-b6117573eb88-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rw6wb\" (UID: \"1ac742ec-fd68-4c89-890f-b6117573eb88\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.890528 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.917298 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.920774 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-d47ng\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") " pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.931607 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.932992 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-mountpoint-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.933080 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-plugins-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.933127 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-mountpoint-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.933509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-plugins-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.933601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-697gs\" (UniqueName: \"kubernetes.io/projected/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-kube-api-access-697gs\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.933740 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pktsq\" (UniqueName: \"kubernetes.io/projected/084d14a7-1f30-49b0-9e05-3db228f5087c-kube-api-access-pktsq\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.933980 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-registration-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.934091 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-registration-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc 
Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.934168 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78d9k\" (UniqueName: \"kubernetes.io/projected/deca563a-35ff-43bc-8251-f3b7c80581b2-kube-api-access-78d9k\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m"
Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.934350 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70b87a0c-be0a-447b-b901-2b327774f436-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4b7sz\" (UID: \"70b87a0c-be0a-447b-b901-2b327774f436\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz"
Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935015 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-config\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc"
Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935074 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-ca\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc"
Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935102 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/084d14a7-1f30-49b0-9e05-3db228f5087c-proxy-tls\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m"
Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935159 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-socket-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m"
Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935182 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c52p7\" (UniqueName: \"kubernetes.io/projected/59780c25-821e-416b-9437-6b78b8487221-kube-api-access-c52p7\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg"
Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935322 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-serving-cert\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc"
Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935376 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59780c25-821e-416b-9437-6b78b8487221-metrics-tls\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg"
Dec 13
03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935417 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-csi-data-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935498 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nh5p\" (UniqueName: \"kubernetes.io/projected/70b87a0c-be0a-447b-b901-2b327774f436-kube-api-access-7nh5p\") pod \"multus-admission-controller-857f4d67dd-4b7sz\" (UID: \"70b87a0c-be0a-447b-b901-2b327774f436\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935519 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-service-ca\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935889 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/084d14a7-1f30-49b0-9e05-3db228f5087c-images\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935966 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/084d14a7-1f30-49b0-9e05-3db228f5087c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.935987 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-client\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.936028 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59780c25-821e-416b-9437-6b78b8487221-config-volume\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.936119 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-socket-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.936376 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/deca563a-35ff-43bc-8251-f3b7c80581b2-csi-data-dir\") pod \"csi-hostpathplugin-j5v8m\" (UID: 
\"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.937160 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/084d14a7-1f30-49b0-9e05-3db228f5087c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.952356 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.971205 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 13 03:46:56 crc kubenswrapper[4766]: I1213 03:46:56.991067 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.003122 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c74ca261-ba68-4ce7-bf90-200deb0b2b11-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwjph\" (UID: \"c74ca261-ba68-4ce7-bf90-200deb0b2b11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.010179 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.020345 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c74ca261-ba68-4ce7-bf90-200deb0b2b11-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwjph\" (UID: \"c74ca261-ba68-4ce7-bf90-200deb0b2b11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.031052 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.049662 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.075453 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.084097 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ad1798-0983-4d00-ae6d-aef7143647f3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l9g79\" (UID: \"c3ad1798-0983-4d00-ae6d-aef7143647f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.090178 4766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.110020 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.123721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8763ae63-aa9e-4f5f-8e81-015aaa58e3a2-serving-cert\") pod \"service-ca-operator-777779d784-zchph\" (UID: \"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.129254 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.131137 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8763ae63-aa9e-4f5f-8e81-015aaa58e3a2-config\") pod \"service-ca-operator-777779d784-zchph\" (UID: \"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.150835 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.170680 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.190359 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.201214 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vpns8\" (UID: \"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.209391 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.231125 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.251939 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.262207 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vpns8\" (UID: \"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.270858 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 13 03:46:57 crc 
kubenswrapper[4766]: I1213 03:46:57.291359 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.303338 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a08f722d-f913-4c66-8b1f-5ad1285884cb-metrics-certs\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.311918 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.330252 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.344592 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a08f722d-f913-4c66-8b1f-5ad1285884cb-default-certificate\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.351268 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.362110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a08f722d-f913-4c66-8b1f-5ad1285884cb-stats-auth\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.372044 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.390104 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.397552 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a08f722d-f913-4c66-8b1f-5ad1285884cb-service-ca-bundle\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.411940 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.432663 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.449682 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.460126 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4831deef-ffff-4e1d-9b1d-57903b35ae2b-signing-key\") pod \"service-ca-9c57cc56f-dh99j\" (UID: \"4831deef-ffff-4e1d-9b1d-57903b35ae2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" Dec 13 03:46:57 crc 
kubenswrapper[4766]: I1213 03:46:57.470624 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.480789 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4831deef-ffff-4e1d-9b1d-57903b35ae2b-signing-cabundle\") pod \"service-ca-9c57cc56f-dh99j\" (UID: \"4831deef-ffff-4e1d-9b1d-57903b35ae2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-dh99j"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.491457 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.509943 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.531382 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.550865 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.568715 4766 request.go:700] Waited for 1.01124404s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.571369 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.582280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f91707aa-5388-47ac-8bed-37b6e6450118-metrics-tls\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.591192 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.623062 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.628836 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f91707aa-5388-47ac-8bed-37b6e6450118-trusted-ca\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.630380 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.651001 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
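Two things show up in this stretch. The reflector.go:368 "Caches populated" lines mark individual watch caches finishing their initial list: the kubelet runs one reflector per secret or configmap that a pod mounts, scoped by namespace and field selector (the GET URL in the request.go:700 line shows exactly that shape, fieldSelector=metadata.name%3Dmetrics-tls). The request.go:700 entry itself is client-go's client-side rate limiter: at startup hundreds of such LISTs go out at once, so some wait behind the limiter, here just over a second, and waits that long get logged. A hedged sketch of the knobs involved; the kubeconfig path and the QPS and Burst values below are illustrative, not the kubelet's actual configuration:

```go
// Hedged sketch: "Waited ... due to client-side throttling" is client-go's
// token-bucket rate limiter delaying a request. The knobs live on rest.Config;
// the values below are illustrative, not the kubelet's actual settings.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained requests/second before the limiter starts queuing
	cfg.Burst = 100 // extra headroom for startup bursts like the one in this log
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Printf("client ready: %T (QPS=%v, Burst=%v)\n", client, cfg.QPS, cfg.Burst)
}
```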
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aa74d23-cdf2-4758-82ff-aaf0017693c3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sdjlm\" (UID: \"0aa74d23-cdf2-4758-82ff-aaf0017693c3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.669954 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.678463 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa74d23-cdf2-4758-82ff-aaf0017693c3-config\") pod \"kube-controller-manager-operator-78b949d7b-sdjlm\" (UID: \"0aa74d23-cdf2-4758-82ff-aaf0017693c3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.690468 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.707317 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4b485d4-889f-4a16-8d5d-ebef14fdeb98-proxy-tls\") pod \"machine-config-controller-84d6567774-j864x\" (UID: \"e4b485d4-889f-4a16-8d5d-ebef14fdeb98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.730648 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.780813 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnfk2\" (UniqueName: \"kubernetes.io/projected/654b71d2-08e7-4c5a-b27c-a6fb253b845c-kube-api-access-wnfk2\") pod \"console-operator-58897d9998-2lcjl\" (UID: \"654b71d2-08e7-4c5a-b27c-a6fb253b845c\") " pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.790230 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs4q6\" (UniqueName: \"kubernetes.io/projected/7b288e53-1206-4832-8d01-94dd9d33f9dd-kube-api-access-zs4q6\") pod \"machine-api-operator-5694c8668f-wrg86\" (UID: \"7b288e53-1206-4832-8d01-94dd9d33f9dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.810142 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rljhx\" (UniqueName: \"kubernetes.io/projected/768b7312-75b7-4135-92ef-dbf8a089efb3-kube-api-access-rljhx\") pod \"cluster-samples-operator-665b6dd947-rqt6m\" (UID: \"768b7312-75b7-4135-92ef-dbf8a089efb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.827631 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t5cj\" (UniqueName: \"kubernetes.io/projected/d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba-kube-api-access-5t5cj\") pod \"apiserver-76f77b778f-754n6\" (UID: \"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba\") " pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.846310 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7t5r\" (UniqueName: \"kubernetes.io/projected/434785a5-a04b-42f1-8f70-d12238df0eff-kube-api-access-w7t5r\") pod \"authentication-operator-69f744f599-kw69h\" (UID: \"434785a5-a04b-42f1-8f70-d12238df0eff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.865901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js9d4\" (UniqueName: \"kubernetes.io/projected/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-kube-api-access-js9d4\") pod \"oauth-openshift-558db77b4-l2gzj\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") " pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.866867 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-754n6" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.870126 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.890972 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.909568 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.911095 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.931048 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.931922 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.935332 4766 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.935477 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70b87a0c-be0a-447b-b901-2b327774f436-webhook-certs podName:70b87a0c-be0a-447b-b901-2b327774f436 nodeName:}" failed. No retries permitted until 2025-12-13 03:46:58.435409817 +0000 UTC m=+149.945342781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/70b87a0c-be0a-447b-b901-2b327774f436-webhook-certs") pod "multus-admission-controller-857f4d67dd-4b7sz" (UID: "70b87a0c-be0a-447b-b901-2b327774f436") : failed to sync secret cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936499 4766 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936544 4766 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936591 4766 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936617 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-serving-cert podName:13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81 nodeName:}" failed. No retries permitted until 2025-12-13 03:46:58.436590632 +0000 UTC m=+149.946523596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-serving-cert") pod "etcd-operator-b45778765-mrdmc" (UID: "13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81") : failed to sync secret cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936619 4766 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936644 4766 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936510 4766 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936653 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59780c25-821e-416b-9437-6b78b8487221-config-volume podName:59780c25-821e-416b-9437-6b78b8487221 nodeName:}" failed. No retries permitted until 2025-12-13 03:46:58.436634843 +0000 UTC m=+149.946567947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/59780c25-821e-416b-9437-6b78b8487221-config-volume") pod "dns-default-bjcvg" (UID: "59780c25-821e-416b-9437-6b78b8487221") : failed to sync configmap cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936705 4766 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936735 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59780c25-821e-416b-9437-6b78b8487221-metrics-tls podName:59780c25-821e-416b-9437-6b78b8487221 nodeName:}" failed. 
No retries permitted until 2025-12-13 03:46:58.436717876 +0000 UTC m=+149.946650920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59780c25-821e-416b-9437-6b78b8487221-metrics-tls") pod "dns-default-bjcvg" (UID: "59780c25-821e-416b-9437-6b78b8487221") : failed to sync secret cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936775 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-ca podName:13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81 nodeName:}" failed. No retries permitted until 2025-12-13 03:46:58.436766117 +0000 UTC m=+149.946699141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-ca") pod "etcd-operator-b45778765-mrdmc" (UID: "13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81") : failed to sync configmap cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936718 4766 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936799 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/084d14a7-1f30-49b0-9e05-3db228f5087c-proxy-tls podName:084d14a7-1f30-49b0-9e05-3db228f5087c nodeName:}" failed. No retries permitted until 2025-12-13 03:46:58.436791678 +0000 UTC m=+149.946724712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/084d14a7-1f30-49b0-9e05-3db228f5087c-proxy-tls") pod "machine-config-operator-74547568cd-pz47m" (UID: "084d14a7-1f30-49b0-9e05-3db228f5087c") : failed to sync secret cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936825 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/084d14a7-1f30-49b0-9e05-3db228f5087c-images podName:084d14a7-1f30-49b0-9e05-3db228f5087c nodeName:}" failed. No retries permitted until 2025-12-13 03:46:58.436812869 +0000 UTC m=+149.946745933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/084d14a7-1f30-49b0-9e05-3db228f5087c-images") pod "machine-config-operator-74547568cd-pz47m" (UID: "084d14a7-1f30-49b0-9e05-3db228f5087c") : failed to sync configmap cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936851 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-config podName:13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81 nodeName:}" failed. No retries permitted until 2025-12-13 03:46:58.43684341 +0000 UTC m=+149.946776374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-config") pod "etcd-operator-b45778765-mrdmc" (UID: "13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81") : failed to sync configmap cache: timed out waiting for the condition Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936871 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-client podName:13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81 nodeName:}" failed. 
No retries permitted until 2025-12-13 03:46:58.43686377 +0000 UTC m=+149.946796834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-client") pod "etcd-operator-b45778765-mrdmc" (UID: "13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81") : failed to sync secret cache: timed out waiting for the condition
Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936747 4766 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Dec 13 03:46:57 crc kubenswrapper[4766]: E1213 03:46:57.936907 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-service-ca podName:13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81 nodeName:}" failed. No retries permitted until 2025-12-13 03:46:58.436899061 +0000 UTC m=+149.946832115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-service-ca") pod "etcd-operator-b45778765-mrdmc" (UID: "13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.950008 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.969837 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.990755 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 13 03:46:57 crc kubenswrapper[4766]: I1213 03:46:57.990913 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h"
Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.051274 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.066822 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86"
Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.073975 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.074213 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.074310 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
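The E-level burst above is a startup race rather than a real fault: secret and configmap volumes are mounted from the kubelet's informer caches, and these pods were processed before their caches had synced, so each SetUp fails with "failed to sync secret cache/configmap cache: timed out waiting for the condition", and nestedpendingoperations.go:348 parks the operation ("No retries permitted until ... durationBeforeRetry 500ms"). The log only shows the initial 500ms delay; the doubling and the cap in the sketch below are assumptions about the usual shape of this kind of backoff, not facts taken from the log:

```go
// Assumed-shape sketch of the per-volume retry backoff implied by
// "durationBeforeRetry 500ms": start at 500ms, grow it on each failure, cap it.
// The doubling factor and the cap are illustrative assumptions.
package main

import (
	"fmt"
	"time"
)

func main() {
	const initial = 500 * time.Millisecond
	const maxBackoff = 2 * time.Minute // assumed cap, not shown in this log
	backoff := initial
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```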
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.090774 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.111030 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.132500 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.150389 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.170218 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.190560 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.221806 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.230388 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.251171 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.270341 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.281496 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2lcjl"] Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.290918 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.311327 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.330661 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.354660 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.373663 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.393300 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.401055 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wrg86"] Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.410620 4766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.429799 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.449532 4766 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.468812 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.487960 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59780c25-821e-416b-9437-6b78b8487221-metrics-tls\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.488031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-service-ca\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.488139 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/084d14a7-1f30-49b0-9e05-3db228f5087c-images\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.488180 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-client\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.488207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59780c25-821e-416b-9437-6b78b8487221-config-volume\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.488292 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70b87a0c-be0a-447b-b901-2b327774f436-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4b7sz\" (UID: \"70b87a0c-be0a-447b-b901-2b327774f436\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.488311 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-config\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.488340 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-ca\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.488359 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/084d14a7-1f30-49b0-9e05-3db228f5087c-proxy-tls\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.488487 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-serving-cert\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.493923 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-ca\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.494137 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-config\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.492270 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-service-ca\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.497741 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-etcd-client\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.498556 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59780c25-821e-416b-9437-6b78b8487221-metrics-tls\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.499602 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70b87a0c-be0a-447b-b901-2b327774f436-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4b7sz\" (UID: \"70b87a0c-be0a-447b-b901-2b327774f436\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.499905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-serving-cert\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.500933 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/084d14a7-1f30-49b0-9e05-3db228f5087c-proxy-tls\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.505197 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-754n6"] Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.506919 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m"] Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.507292 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkhgc\" (UniqueName: \"kubernetes.io/projected/d9ebba92-4a0b-494f-995e-09ebdcd65a6e-kube-api-access-zkhgc\") pod \"packageserver-d55dfcdfc-rgvbd\" (UID: \"d9ebba92-4a0b-494f-995e-09ebdcd65a6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.542022 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwn8s\" (UniqueName: \"kubernetes.io/projected/6c019342-bb58-4587-b4fb-24f0641905b1-kube-api-access-bwn8s\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.545471 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6hj5\" (UniqueName: \"kubernetes.io/projected/59e373cc-61e8-4c1a-9734-6a2120179e36-kube-api-access-w6hj5\") pod \"controller-manager-879f6c89f-nzfkv\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.565990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ggpw\" (UniqueName: \"kubernetes.io/projected/fe3fb891-3c52-49fc-9b5c-22c7dbde195b-kube-api-access-2ggpw\") pod \"openshift-apiserver-operator-796bbdcf4f-tzfzc\" (UID: \"fe3fb891-3c52-49fc-9b5c-22c7dbde195b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.570722 4766 request.go:700] Waited for 1.853104186s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.573225 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l2gzj"] Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.578800 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kw69h"] Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.591523 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ms5qw\" (UniqueName: \"kubernetes.io/projected/a26b74bc-41ac-4d65-b833-cb269d199ddd-kube-api-access-ms5qw\") pod \"route-controller-manager-6576b87f9c-kxg69\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.598210 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.604385 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcb2f\" (UniqueName: \"kubernetes.io/projected/867694b5-7481-462e-b143-379b53b0ad6e-kube-api-access-tcb2f\") pod \"migrator-59844c95c7-kxzv9\" (UID: \"867694b5-7481-462e-b143-379b53b0ad6e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.627471 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6sxv\" (UniqueName: \"kubernetes.io/projected/fc132ae0-d7bd-4064-89ad-4f9a57e76369-kube-api-access-g6sxv\") pod \"console-f9d7485db-x546d\" (UID: \"fc132ae0-d7bd-4064-89ad-4f9a57e76369\") " pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.634987 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" event={"ID":"7b288e53-1206-4832-8d01-94dd9d33f9dd","Type":"ContainerStarted","Data":"c355c28bf4bbe25341aaa39b7db6074f334d3e7ce18b2bc895ddba173b64e5e9"} Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.638797 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2lcjl" event={"ID":"654b71d2-08e7-4c5a-b27c-a6fb253b845c","Type":"ContainerStarted","Data":"bc20c60f8b8546e1b41abd115f06f4b0626bcffa7343cd539077fa284354083a"} Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.639673 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-754n6" event={"ID":"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba","Type":"ContainerStarted","Data":"ee399ba9a289cd693c9821cd8a1128778a9e18ddbcca2469c5e5a7607bd5ad28"} Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.648374 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9sfm\" (UniqueName: \"kubernetes.io/projected/b486f774-cede-460b-bc98-a89766288e88-kube-api-access-w9sfm\") pod \"openshift-config-operator-7777fb866f-4nlxj\" (UID: \"b486f774-cede-460b-bc98-a89766288e88\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.660038 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.671840 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.689578 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfbpr\" (UniqueName: \"kubernetes.io/projected/13f10968-6869-4ee6-975a-29260f3914ba-kube-api-access-wfbpr\") pod \"apiserver-7bbb656c7d-m2vrm\" (UID: \"13f10968-6869-4ee6-975a-29260f3914ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.716118 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b2qfj\" (UID: \"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.747039 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzlbc\" (UniqueName: \"kubernetes.io/projected/cf994913-919d-4268-b606-5b0286e721d0-kube-api-access-pzlbc\") pod \"openshift-controller-manager-operator-756b6f6bc6-62jdj\" (UID: \"cf994913-919d-4268-b606-5b0286e721d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.767516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7v9g\" (UniqueName: \"kubernetes.io/projected/a71b41ac-3909-4658-8b06-703e8cdba663-kube-api-access-h7v9g\") pod \"catalog-operator-68c6474976-k7zxp\" (UID: \"a71b41ac-3909-4658-8b06-703e8cdba663\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.775614 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.786114 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.797342 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.804464 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.806110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c019342-bb58-4587-b4fb-24f0641905b1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8rm5b\" (UID: \"6c019342-bb58-4587-b4fb-24f0641905b1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.827102 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdnf4\" (UniqueName: \"kubernetes.io/projected/ca84cfba-243f-4342-9550-db6256f0c2f8-kube-api-access-cdnf4\") pod \"olm-operator-6b444d44fb-drbvl\" (UID: \"ca84cfba-243f-4342-9550-db6256f0c2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.830933 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.850123 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.850220 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.863810 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.871001 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.873988 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.883213 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.889824 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.891802 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.905298 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.925570 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z88h4\" (UniqueName: \"kubernetes.io/projected/3c6809b6-c67a-45cf-b251-f20b62790313-kube-api-access-z88h4\") pod \"marketplace-operator-79b997595-d47ng\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") " pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.930109 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/084d14a7-1f30-49b0-9e05-3db228f5087c-images\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.930635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbjl5\" (UniqueName: \"kubernetes.io/projected/41447db8-0fe6-4772-8bbb-12a68ba33f1e-kube-api-access-xbjl5\") pod \"downloads-7954f5f757-tjszx\" (UID: \"41447db8-0fe6-4772-8bbb-12a68ba33f1e\") " pod="openshift-console/downloads-7954f5f757-tjszx" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.931092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59780c25-821e-416b-9437-6b78b8487221-config-volume\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.935061 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pkzd\" (UniqueName: \"kubernetes.io/projected/ce7d03d7-4dcc-4d25-909d-a0db72482053-kube-api-access-6pkzd\") pod \"machine-approver-56656f9798-wvqbr\" (UID: \"ce7d03d7-4dcc-4d25-909d-a0db72482053\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.935166 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.963550 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxthz\" (UniqueName: \"kubernetes.io/projected/e283eb17-df64-4565-8f82-a8afac963d4c-kube-api-access-cxthz\") pod \"collect-profiles-29426625-n2qvr\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.967474 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxzqx\" (UniqueName: \"kubernetes.io/projected/e4b485d4-889f-4a16-8d5d-ebef14fdeb98-kube-api-access-jxzqx\") pod \"machine-config-controller-84d6567774-j864x\" (UID: \"e4b485d4-889f-4a16-8d5d-ebef14fdeb98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:58 crc kubenswrapper[4766]: I1213 03:46:58.986132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f91707aa-5388-47ac-8bed-37b6e6450118-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.008639 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vpns8\" (UID: \"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.030974 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tgtp\" (UniqueName: \"kubernetes.io/projected/1ac742ec-fd68-4c89-890f-b6117573eb88-kube-api-access-5tgtp\") pod \"package-server-manager-789f6589d5-rw6wb\" (UID: \"1ac742ec-fd68-4c89-890f-b6117573eb88\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.035571 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.045942 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26gzk\" (UniqueName: \"kubernetes.io/projected/a08f722d-f913-4c66-8b1f-5ad1285884cb-kube-api-access-26gzk\") pod \"router-default-5444994796-zxh2f\" (UID: \"a08f722d-f913-4c66-8b1f-5ad1285884cb\") " pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.070731 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0aa74d23-cdf2-4758-82ff-aaf0017693c3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sdjlm\" (UID: \"0aa74d23-cdf2-4758-82ff-aaf0017693c3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.072837 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-tjszx" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.089070 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrvtc\" (UniqueName: \"kubernetes.io/projected/c3ad1798-0983-4d00-ae6d-aef7143647f3-kube-api-access-xrvtc\") pod \"control-plane-machine-set-operator-78cbb6b69f-l9g79\" (UID: \"c3ad1798-0983-4d00-ae6d-aef7143647f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.122273 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mfbw\" (UniqueName: \"kubernetes.io/projected/f91707aa-5388-47ac-8bed-37b6e6450118-kube-api-access-2mfbw\") pod \"ingress-operator-5b745b69d9-pfb6w\" (UID: \"f91707aa-5388-47ac-8bed-37b6e6450118\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.127603 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnv85\" (UniqueName: \"kubernetes.io/projected/4831deef-ffff-4e1d-9b1d-57903b35ae2b-kube-api-access-vnv85\") pod \"service-ca-9c57cc56f-dh99j\" (UID: \"4831deef-ffff-4e1d-9b1d-57903b35ae2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.152087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr7v9\" (UniqueName: \"kubernetes.io/projected/8763ae63-aa9e-4f5f-8e81-015aaa58e3a2-kube-api-access-mr7v9\") pod \"service-ca-operator-777779d784-zchph\" (UID: \"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.256834 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.257467 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.258582 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.259096 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.268331 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.275632 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.286000 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c52p7\" (UniqueName: \"kubernetes.io/projected/59780c25-821e-416b-9437-6b78b8487221-kube-api-access-c52p7\") pod \"dns-default-bjcvg\" (UID: \"59780c25-821e-416b-9437-6b78b8487221\") " pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.289003 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.329798 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.331031 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.375294 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.445717 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bjcvg" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.576746 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nh5p\" (UniqueName: \"kubernetes.io/projected/70b87a0c-be0a-447b-b901-2b327774f436-kube-api-access-7nh5p\") pod \"multus-admission-controller-857f4d67dd-4b7sz\" (UID: \"70b87a0c-be0a-447b-b901-2b327774f436\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.587182 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbfhl\" (UniqueName: \"kubernetes.io/projected/c74ca261-ba68-4ce7-bf90-200deb0b2b11-kube-api-access-dbfhl\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwjph\" (UID: \"c74ca261-ba68-4ce7-bf90-200deb0b2b11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.589311 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pktsq\" (UniqueName: \"kubernetes.io/projected/084d14a7-1f30-49b0-9e05-3db228f5087c-kube-api-access-pktsq\") pod \"machine-config-operator-74547568cd-pz47m\" (UID: \"084d14a7-1f30-49b0-9e05-3db228f5087c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.674089 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78d9k\" (UniqueName: \"kubernetes.io/projected/deca563a-35ff-43bc-8251-f3b7c80581b2-kube-api-access-78d9k\") pod \"csi-hostpathplugin-j5v8m\" (UID: \"deca563a-35ff-43bc-8251-f3b7c80581b2\") " pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.675345 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8fe9b78-26e6-41df-9826-905771389a60-metrics-tls\") pod 
\"dns-operator-744455d44c-6x9j4\" (UID: \"c8fe9b78-26e6-41df-9826-905771389a60\") " pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.675407 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frwcb\" (UniqueName: \"kubernetes.io/projected/10191553-e60f-47dc-a4f9-b74d8bcc7a58-kube-api-access-frwcb\") pod \"machine-config-server-f9jwz\" (UID: \"10191553-e60f-47dc-a4f9-b74d8bcc7a58\") " pod="openshift-machine-config-operator/machine-config-server-f9jwz" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.675502 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/342d355f-91c2-4c74-b72e-fa4164314fe1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.675630 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-tls\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.675696 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-certificates\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.675992 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlkq6\" (UniqueName: \"kubernetes.io/projected/c8fe9b78-26e6-41df-9826-905771389a60-kube-api-access-vlkq6\") pod \"dns-operator-744455d44c-6x9j4\" (UID: \"c8fe9b78-26e6-41df-9826-905771389a60\") " pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.676049 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/10191553-e60f-47dc-a4f9-b74d8bcc7a58-certs\") pod \"machine-config-server-f9jwz\" (UID: \"10191553-e60f-47dc-a4f9-b74d8bcc7a58\") " pod="openshift-machine-config-operator/machine-config-server-f9jwz" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.676255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.676455 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws6tp\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-kube-api-access-ws6tp\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: 
\"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.676530 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-bound-sa-token\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.676604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/342d355f-91c2-4c74-b72e-fa4164314fe1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.676630 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-trusted-ca\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.676734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/10191553-e60f-47dc-a4f9-b74d8bcc7a58-node-bootstrap-token\") pod \"machine-config-server-f9jwz\" (UID: \"10191553-e60f-47dc-a4f9-b74d8bcc7a58\") " pod="openshift-machine-config-operator/machine-config-server-f9jwz" Dec 13 03:46:59 crc kubenswrapper[4766]: E1213 03:46:59.682580 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.182560272 +0000 UTC m=+151.692493236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.695706 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.723999 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.765223 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.782962 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.783290 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgmgs\" (UniqueName: \"kubernetes.io/projected/5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d-kube-api-access-kgmgs\") pod \"ingress-canary-8ckg8\" (UID: \"5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d\") " pod="openshift-ingress-canary/ingress-canary-8ckg8" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.783367 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8fe9b78-26e6-41df-9826-905771389a60-metrics-tls\") pod \"dns-operator-744455d44c-6x9j4\" (UID: \"c8fe9b78-26e6-41df-9826-905771389a60\") " pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.783388 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frwcb\" (UniqueName: \"kubernetes.io/projected/10191553-e60f-47dc-a4f9-b74d8bcc7a58-kube-api-access-frwcb\") pod \"machine-config-server-f9jwz\" (UID: \"10191553-e60f-47dc-a4f9-b74d8bcc7a58\") " pod="openshift-machine-config-operator/machine-config-server-f9jwz" Dec 13 03:46:59 crc kubenswrapper[4766]: E1213 03:46:59.783515 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.283479825 +0000 UTC m=+151.793412799 (durationBeforeRetry 500ms). 
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.783566 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/342d355f-91c2-4c74-b72e-fa4164314fe1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.783665 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-tls\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.783779 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-certificates\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.783859 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlkq6\" (UniqueName: \"kubernetes.io/projected/c8fe9b78-26e6-41df-9826-905771389a60-kube-api-access-vlkq6\") pod \"dns-operator-744455d44c-6x9j4\" (UID: \"c8fe9b78-26e6-41df-9826-905771389a60\") " pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.783911 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/10191553-e60f-47dc-a4f9-b74d8bcc7a58-certs\") pod \"machine-config-server-f9jwz\" (UID: \"10191553-e60f-47dc-a4f9-b74d8bcc7a58\") " pod="openshift-machine-config-operator/machine-config-server-f9jwz"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.784051 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: E1213 03:46:59.784476 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.284468254 +0000 UTC m=+151.794401218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.784590 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d-cert\") pod \"ingress-canary-8ckg8\" (UID: \"5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d\") " pod="openshift-ingress-canary/ingress-canary-8ckg8"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.790351 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-certificates\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.792584 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws6tp\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-kube-api-access-ws6tp\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.794011 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-bound-sa-token\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.795871 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-697gs\" (UniqueName: \"kubernetes.io/projected/13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81-kube-api-access-697gs\") pod \"etcd-operator-b45778765-mrdmc\" (UID: \"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.796855 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/342d355f-91c2-4c74-b72e-fa4164314fe1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.796923 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-trusted-ca\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.797587 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/342d355f-91c2-4c74-b72e-fa4164314fe1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
\"kubernetes.io/empty-dir/342d355f-91c2-4c74-b72e-fa4164314fe1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.801924 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/10191553-e60f-47dc-a4f9-b74d8bcc7a58-node-bootstrap-token\") pod \"machine-config-server-f9jwz\" (UID: \"10191553-e60f-47dc-a4f9-b74d8bcc7a58\") " pod="openshift-machine-config-operator/machine-config-server-f9jwz" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.807832 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8fe9b78-26e6-41df-9826-905771389a60-metrics-tls\") pod \"dns-operator-744455d44c-6x9j4\" (UID: \"c8fe9b78-26e6-41df-9826-905771389a60\") " pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.818412 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/342d355f-91c2-4c74-b72e-fa4164314fe1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.820633 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/10191553-e60f-47dc-a4f9-b74d8bcc7a58-certs\") pod \"machine-config-server-f9jwz\" (UID: \"10191553-e60f-47dc-a4f9-b74d8bcc7a58\") " pod="openshift-machine-config-operator/machine-config-server-f9jwz" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.831465 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws6tp\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-kube-api-access-ws6tp\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.832191 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-tls\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.845905 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.850948 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frwcb\" (UniqueName: \"kubernetes.io/projected/10191553-e60f-47dc-a4f9-b74d8bcc7a58-kube-api-access-frwcb\") pod \"machine-config-server-f9jwz\" (UID: \"10191553-e60f-47dc-a4f9-b74d8bcc7a58\") " pod="openshift-machine-config-operator/machine-config-server-f9jwz" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.903295 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.903732 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d-cert\") pod \"ingress-canary-8ckg8\" (UID: \"5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d\") " pod="openshift-ingress-canary/ingress-canary-8ckg8" Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.903950 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgmgs\" (UniqueName: \"kubernetes.io/projected/5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d-kube-api-access-kgmgs\") pod \"ingress-canary-8ckg8\" (UID: \"5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d\") " pod="openshift-ingress-canary/ingress-canary-8ckg8" Dec 13 03:46:59 crc kubenswrapper[4766]: E1213 03:46:59.904749 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.404704994 +0000 UTC m=+151.914637958 (durationBeforeRetry 500ms). 
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.909948 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-trusted-ca\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.913282 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d-cert\") pod \"ingress-canary-8ckg8\" (UID: \"5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d\") " pod="openshift-ingress-canary/ingress-canary-8ckg8"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.917797 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlkq6\" (UniqueName: \"kubernetes.io/projected/c8fe9b78-26e6-41df-9826-905771389a60-kube-api-access-vlkq6\") pod \"dns-operator-744455d44c-6x9j4\" (UID: \"c8fe9b78-26e6-41df-9826-905771389a60\") " pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.930289 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgmgs\" (UniqueName: \"kubernetes.io/projected/5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d-kube-api-access-kgmgs\") pod \"ingress-canary-8ckg8\" (UID: \"5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d\") " pod="openshift-ingress-canary/ingress-canary-8ckg8"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.930876 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-bound-sa-token\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.978145 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/10191553-e60f-47dc-a4f9-b74d8bcc7a58-node-bootstrap-token\") pod \"machine-config-server-f9jwz\" (UID: \"10191553-e60f-47dc-a4f9-b74d8bcc7a58\") " pod="openshift-machine-config-operator/machine-config-server-f9jwz"
Dec 13 03:46:59 crc kubenswrapper[4766]: I1213 03:46:59.985995 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc"
Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.008772 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f9jwz"
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f9jwz" Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.010182 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.034634 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4" Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.071998 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8ckg8" Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.083044 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.583008819 +0000 UTC m=+152.092941783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.115524 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.115937 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.615918455 +0000 UTC m=+152.125851419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.135280 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" event={"ID":"768b7312-75b7-4135-92ef-dbf8a089efb3","Type":"ContainerStarted","Data":"12408ef3ef50441f7df32efd5f907fa3962252ae094b92875a1f974e11e4327e"} Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.156952 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" event={"ID":"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a","Type":"ContainerStarted","Data":"233cae5fdbbef271099572cc9f46f4e02df9787b26f61f577adb6203dc172224"} Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.159417 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" event={"ID":"7b288e53-1206-4832-8d01-94dd9d33f9dd","Type":"ContainerStarted","Data":"a84e6677d8b2ef6dbca8429a3798c90a4c86bc85770ec7e628ff58a6071f1878"} Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.174273 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2lcjl" event={"ID":"654b71d2-08e7-4c5a-b27c-a6fb253b845c","Type":"ContainerStarted","Data":"5d578dbd38b671d4d0858f7c4526f15421a552fd7a817f2adfa16d88008a7c74"} Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.175038 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.183774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" event={"ID":"434785a5-a04b-42f1-8f70-d12238df0eff","Type":"ContainerStarted","Data":"3c003394bb32ae69264359d87334b355c2656d4adb9d0d258823769012e3f5d5"} Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.217812 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.230729 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.730699485 +0000 UTC m=+152.240632449 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.262019 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm"] Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.319576 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.319965 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.819917954 +0000 UTC m=+152.329850918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.320188 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.321001 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.820960125 +0000 UTC m=+152.330893089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.421500 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.421813 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.921755074 +0000 UTC m=+152.431688038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.422686 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.423269 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:00.923251908 +0000 UTC m=+152.433184872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.585524 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.586465 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:01.086419999 +0000 UTC m=+152.596352963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.687963 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.688784 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:01.188748203 +0000 UTC m=+152.698681247 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.790069 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.790476 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:01.290457499 +0000 UTC m=+152.800390463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: W1213 03:47:00.795914 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce7d03d7_4dcc_4d25_909d_a0db72482053.slice/crio-0dcb57ae0fe41cdd25bc67a01144ba58b7678affda234e522713fcc5a44582a1 WatchSource:0}: Error finding container 0dcb57ae0fe41cdd25bc67a01144ba58b7678affda234e522713fcc5a44582a1: Status 404 returned error can't find the container with id 0dcb57ae0fe41cdd25bc67a01144ba58b7678affda234e522713fcc5a44582a1 Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.892091 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.893063 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:01.393041541 +0000 UTC m=+152.902974505 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.994492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.994699 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:01.494665904 +0000 UTC m=+153.004598868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:00 crc kubenswrapper[4766]: I1213 03:47:00.995177 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:00 crc kubenswrapper[4766]: E1213 03:47:00.995792 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:01.495774157 +0000 UTC m=+153.005707121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.096203 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:01 crc kubenswrapper[4766]: E1213 03:47:01.097839 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:01.597798132 +0000 UTC m=+153.107731096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.197767 4766 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lcjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.197880 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2lcjl" podUID="654b71d2-08e7-4c5a-b27c-a6fb253b845c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.200159 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:01 crc kubenswrapper[4766]: E1213 03:47:01.201150 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:01.701127516 +0000 UTC m=+153.211060480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.242467 4766 generic.go:334] "Generic (PLEG): container finished" podID="d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba" containerID="f8ac3058547b2275b286666059c820cbaca28fe57ad95dbb11afc1e7126653ce" exitCode=0 Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.242969 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-754n6" event={"ID":"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba","Type":"ContainerDied","Data":"f8ac3058547b2275b286666059c820cbaca28fe57ad95dbb11afc1e7126653ce"} Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.247692 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" event={"ID":"768b7312-75b7-4135-92ef-dbf8a089efb3","Type":"ContainerStarted","Data":"5f2486215ce72b0e14384345ba12492765bfcccd8c3dac603fb1c5e41c567a7f"} Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.251346 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" event={"ID":"13f10968-6869-4ee6-975a-29260f3914ba","Type":"ContainerStarted","Data":"60b89e7aa53507cad344240b156eb3bc07d17d5efe95e2f1f57b9b5003a6f9fe"} Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.256620 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zxh2f" event={"ID":"a08f722d-f913-4c66-8b1f-5ad1285884cb","Type":"ContainerStarted","Data":"0698ee9a921e49134f8b9a5ec9c41773035ba9a710aca87f8d3efc7dba24c259"} Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.258871 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" event={"ID":"ce7d03d7-4dcc-4d25-909d-a0db72482053","Type":"ContainerStarted","Data":"0dcb57ae0fe41cdd25bc67a01144ba58b7678affda234e522713fcc5a44582a1"} Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.303349 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:01 crc kubenswrapper[4766]: E1213 03:47:01.303867 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:01.803823041 +0000 UTC m=+153.313756005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.353652 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2lcjl" podStartSLOduration=128.353620143 podStartE2EDuration="2m8.353620143s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:01.347082251 +0000 UTC m=+152.857015235" watchObservedRunningTime="2025-12-13 03:47:01.353620143 +0000 UTC m=+152.863553117" Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.504740 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:01 crc kubenswrapper[4766]: E1213 03:47:01.508061 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.008040367 +0000 UTC m=+153.517973331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.606394 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:01 crc kubenswrapper[4766]: E1213 03:47:01.607714 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.107684532 +0000 UTC m=+153.617617496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.709233 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:01 crc kubenswrapper[4766]: E1213 03:47:01.709764 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.209747299 +0000 UTC m=+153.719680263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.789480 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x546d"] Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.810532 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.810576 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b"] Dec 13 03:47:01 crc kubenswrapper[4766]: E1213 03:47:01.810810 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.310767205 +0000 UTC m=+153.820700169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.811035 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:01 crc kubenswrapper[4766]: E1213 03:47:01.811843 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.311833536 +0000 UTC m=+153.821766510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.824028 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d47ng"] Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.912522 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:01 crc kubenswrapper[4766]: E1213 03:47:01.912674 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.412651396 +0000 UTC m=+153.922584350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:01 crc kubenswrapper[4766]: I1213 03:47:01.912971 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:01 crc kubenswrapper[4766]: E1213 03:47:01.913374 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.413361597 +0000 UTC m=+153.923294561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.016778 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.017106 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.517069252 +0000 UTC m=+154.027002216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.017631 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.018029 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.51802087 +0000 UTC m=+154.027953834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: W1213 03:47:02.108537 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c6809b6_c67a_45cf_b251_f20b62790313.slice/crio-bc16cadfd51a66dd20a680355103a5fca50dae53b80186e7ce82b56f10892baf WatchSource:0}: Error finding container bc16cadfd51a66dd20a680355103a5fca50dae53b80186e7ce82b56f10892baf: Status 404 returned error can't find the container with id bc16cadfd51a66dd20a680355103a5fca50dae53b80186e7ce82b56f10892baf Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.124159 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.127717 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.627687589 +0000 UTC m=+154.137620553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.226557 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.227052 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.727035416 +0000 UTC m=+154.236968380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.253018 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2lcjl" Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.427685 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.428093 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:02.928072978 +0000 UTC m=+154.438005952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.481967 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" event={"ID":"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a","Type":"ContainerStarted","Data":"f908d52b65e3fdb94e922b78d11b009b195f7302142cf7a20c6d8facb7e60100"} Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.491422 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.495840 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-f9jwz" event={"ID":"10191553-e60f-47dc-a4f9-b74d8bcc7a58","Type":"ContainerStarted","Data":"540e1e69e062e300cebed808f06b95ca03855dcd9a5880dafa18b7d1db86471b"} Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.506704 4766 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-l2gzj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.506783 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.508243 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" event={"ID":"3c6809b6-c67a-45cf-b251-f20b62790313","Type":"ContainerStarted","Data":"bc16cadfd51a66dd20a680355103a5fca50dae53b80186e7ce82b56f10892baf"} Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.516657 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" podStartSLOduration=129.516624188 podStartE2EDuration="2m9.516624188s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:02.515254628 +0000 UTC m=+154.025187592" watchObservedRunningTime="2025-12-13 03:47:02.516624188 +0000 UTC m=+154.026557152" Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.532130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.533818 4766 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.033800343 +0000 UTC m=+154.543733417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.543691 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x546d" event={"ID":"fc132ae0-d7bd-4064-89ad-4f9a57e76369","Type":"ContainerStarted","Data":"0a9b2615d32d7546a1a8c93cfd3f24d417d3f866fcb48ff5ea10994e62523d34"} Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.545061 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" event={"ID":"434785a5-a04b-42f1-8f70-d12238df0eff","Type":"ContainerStarted","Data":"69dd3621c1be1ed8557b18787c43d8bd054c63606baa9d404a3a363c689b02b3"} Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.551918 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" event={"ID":"6c019342-bb58-4587-b4fb-24f0641905b1","Type":"ContainerStarted","Data":"9f0a5013da4262a736ea476a62d471340f5eb7eaa5206b4814970b01cc5941c0"} Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.634055 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.634632 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.134605292 +0000 UTC m=+154.644538256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.634727 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.636562 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.136548339 +0000 UTC m=+154.646481303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.773823 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.774860 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.274818658 +0000 UTC m=+154.784751622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.775086 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.784201 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.284168412 +0000 UTC m=+154.794101376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.877017 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.877505 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.377486602 +0000 UTC m=+154.887419566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.897905 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kw69h" podStartSLOduration=129.897876921 podStartE2EDuration="2m9.897876921s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:02.705234436 +0000 UTC m=+154.215167400" watchObservedRunningTime="2025-12-13 03:47:02.897876921 +0000 UTC m=+154.407809905" Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.898409 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" podStartSLOduration=129.898399906 podStartE2EDuration="2m9.898399906s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:02.896219592 +0000 UTC m=+154.406152576" watchObservedRunningTime="2025-12-13 03:47:02.898399906 +0000 UTC m=+154.408332900" Dec 13 03:47:02 crc kubenswrapper[4766]: I1213 03:47:02.979568 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:02 crc kubenswrapper[4766]: E1213 03:47:02.980171 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.480146106 +0000 UTC m=+154.990079250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.080132 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:03 crc kubenswrapper[4766]: E1213 03:47:03.080440 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.580385479 +0000 UTC m=+155.090318603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.080599 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:03 crc kubenswrapper[4766]: E1213 03:47:03.081026 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.581005937 +0000 UTC m=+155.090938901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.181373 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:03 crc kubenswrapper[4766]: E1213 03:47:03.181708 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.681688363 +0000 UTC m=+155.191621317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.192711 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj"]
Dec 13 03:47:03 crc kubenswrapper[4766]: W1213 03:47:03.210405 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf994913_919d_4268_b606_5b0286e721d0.slice/crio-d9f76cf391182686803167da87d7c66d05fd05535bd94cde5d2febfb532e3bfe WatchSource:0}: Error finding container d9f76cf391182686803167da87d7c66d05fd05535bd94cde5d2febfb532e3bfe: Status 404 returned error can't find the container with id d9f76cf391182686803167da87d7c66d05fd05535bd94cde5d2febfb532e3bfe
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.237312 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj"]
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.282793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:03 crc kubenswrapper[4766]: E1213 03:47:03.283394 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.783364679 +0000 UTC m=+155.293297823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.398129 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:03 crc kubenswrapper[4766]: E1213 03:47:03.399828 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:03.899789017 +0000 UTC m=+155.409721981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.500036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:03 crc kubenswrapper[4766]: E1213 03:47:03.500871 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:04.000854374 +0000 UTC m=+155.510787338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.795655 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:03 crc kubenswrapper[4766]: E1213 03:47:03.796490 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:04.296456268 +0000 UTC m=+155.806389232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.845754 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-754n6" event={"ID":"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba","Type":"ContainerStarted","Data":"b345505d50c7e4a02471974ec86f4b26a89581d310ff82a8b8d9b5b0efb17dd7"}
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.847497 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" event={"ID":"b486f774-cede-460b-bc98-a89766288e88","Type":"ContainerStarted","Data":"ba0a6d3e6b5f76c8e3c571331466263e0aede6814ecd14b3ed4cffd567349e4f"}
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.848924 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" event={"ID":"cf994913-919d-4268-b606-5b0286e721d0","Type":"ContainerStarted","Data":"d9f76cf391182686803167da87d7c66d05fd05535bd94cde5d2febfb532e3bfe"}
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.851795 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" event={"ID":"7b288e53-1206-4832-8d01-94dd9d33f9dd","Type":"ContainerStarted","Data":"824ebf4f7dfd9d31556c9ef4e9c099ff27c70e2b316f727f30ea667853c0bfae"}
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.904110 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" event={"ID":"ce7d03d7-4dcc-4d25-909d-a0db72482053","Type":"ContainerStarted","Data":"6b8cdd690e8309abc4a031445a7e87ce9243f1c5bc62af41680eb8e35c12ab8a"}
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.906722 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rqt6m" event={"ID":"768b7312-75b7-4135-92ef-dbf8a089efb3","Type":"ContainerStarted","Data":"987cf1308ad9513d25d2a35da2e7ceb9e8b33a05ec7766fcce4a8a831682ba1e"}
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.914006 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:03 crc kubenswrapper[4766]: E1213 03:47:03.914413 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:04.414395575 +0000 UTC m=+155.924328539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.930722 4766 generic.go:334] "Generic (PLEG): container finished" podID="13f10968-6869-4ee6-975a-29260f3914ba" containerID="bf0348f6b7bb9d94418910ea4d79f7746364f81c85843ca38afe3aecc536bb75" exitCode=0
Dec 13 03:47:03 crc kubenswrapper[4766]: I1213 03:47:03.931759 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" event={"ID":"13f10968-6869-4ee6-975a-29260f3914ba","Type":"ContainerDied","Data":"bf0348f6b7bb9d94418910ea4d79f7746364f81c85843ca38afe3aecc536bb75"}
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.014648 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.015195 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:04.515169534 +0000 UTC m=+156.025102518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.018246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zxh2f" event={"ID":"a08f722d-f913-4c66-8b1f-5ad1285884cb","Type":"ContainerStarted","Data":"178d576a7cd803bffc79ae32d077a5a5683fa79966d057983190acbba73431d6"}
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.019191 4766 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-l2gzj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body=
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.019235 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused"
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.027122 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.027190 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.037498 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tjszx"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.038665 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.049542 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzfkv"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.069933 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zxh2f" podStartSLOduration=130.069903791 podStartE2EDuration="2m10.069903791s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:04.067911602 +0000 UTC m=+155.577844566" watchObservedRunningTime="2025-12-13 03:47:04.069903791 +0000 UTC m=+155.579836755"
Dec 13 03:47:04 crc kubenswrapper[4766]: W1213 03:47:04.089304 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca84cfba_243f_4342_9550_db6256f0c2f8.slice/crio-6d9d9b5b6fcd5d504a824d86764c3fbc496917356a2f08c5d07bae680752d900 WatchSource:0}: Error finding container 6d9d9b5b6fcd5d504a824d86764c3fbc496917356a2f08c5d07bae680752d900: Status 404 returned error can't find the container with id 6d9d9b5b6fcd5d504a824d86764c3fbc496917356a2f08c5d07bae680752d900
Dec 13 03:47:04 crc kubenswrapper[4766]: W1213 03:47:04.092974 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda71b41ac_3909_4658_8b06_703e8cdba663.slice/crio-21c065b7432bbbcbe0fb85ffa5eda8ca44e0113c7804937dbd466c2f0395e626 WatchSource:0}: Error finding container 21c065b7432bbbcbe0fb85ffa5eda8ca44e0113c7804937dbd466c2f0395e626: Status 404 returned error can't find the container with id 21c065b7432bbbcbe0fb85ffa5eda8ca44e0113c7804937dbd466c2f0395e626
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.118188 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.120409 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:04.620393643 +0000 UTC m=+156.130326607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.225809 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.226327 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:04.726300133 +0000 UTC m=+156.236233097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.328169 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.328696 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:04.828677348 +0000 UTC m=+156.338610312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.330627 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zxh2f"
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.375795 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:04 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:04 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:04 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.375872 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.429758 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.430926 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:04.930867829 +0000 UTC m=+156.440800793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.498161 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.502615 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.532770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.533405 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.033381088 +0000 UTC m=+156.543314052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.588776 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.594718 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.608268 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zchph"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.608344 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mrdmc"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.620967 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.625074 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.635118 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.635381 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.135316951 +0000 UTC m=+156.645249915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.635601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.636496 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.136487735 +0000 UTC m=+156.646420699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.650580 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6x9j4"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.650697 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4b7sz"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.654056 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bjcvg"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.677609 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.677863 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.679959 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.686359 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.686447 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dh99j"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.689219 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.690759 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8ckg8"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.693292 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j5v8m"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.695635 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j864x"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.697105 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc"]
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.736395 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.736769 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.236739619 +0000 UTC m=+156.746672593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: W1213 03:47:04.816771 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e5d8bff_9cca_40f0_a1b3_1ff04b1a972d.slice/crio-f7d773792ab735d9ccac0c562a49c411c95e35d67bb133938b726615ddbbc36e WatchSource:0}: Error finding container f7d773792ab735d9ccac0c562a49c411c95e35d67bb133938b726615ddbbc36e: Status 404 returned error can't find the container with id f7d773792ab735d9ccac0c562a49c411c95e35d67bb133938b726615ddbbc36e
Dec 13 03:47:04 crc kubenswrapper[4766]: W1213 03:47:04.831585 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod084d14a7_1f30_49b0_9e05_3db228f5087c.slice/crio-ebeab44a92cf585910b5ba5bb09a307b531c40649f1ce13c78447d95afc0ab72 WatchSource:0}: Error finding container ebeab44a92cf585910b5ba5bb09a307b531c40649f1ce13c78447d95afc0ab72: Status 404 returned error can't find the container with id ebeab44a92cf585910b5ba5bb09a307b531c40649f1ce13c78447d95afc0ab72
Dec 13 03:47:04 crc kubenswrapper[4766]: W1213 03:47:04.834638 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3ad1798_0983_4d00_ae6d_aef7143647f3.slice/crio-65fdcce6ded87a5ccd28215fe5071fbf6dbb475635730cc947ddc86e247cd749 WatchSource:0}: Error finding container 65fdcce6ded87a5ccd28215fe5071fbf6dbb475635730cc947ddc86e247cd749: Status 404 returned error can't find the container with id 65fdcce6ded87a5ccd28215fe5071fbf6dbb475635730cc947ddc86e247cd749
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.837700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.838187 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.338168357 +0000 UTC m=+156.848101331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: W1213 03:47:04.839670 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17fbe4f6_c1a3_4e9f_b01c_0645f26b8ebe.slice/crio-61c76627e33b6c1b136eb47d8a7dae8197d177c6b09b49c16c8f81e92542cf01 WatchSource:0}: Error finding container 61c76627e33b6c1b136eb47d8a7dae8197d177c6b09b49c16c8f81e92542cf01: Status 404 returned error can't find the container with id 61c76627e33b6c1b136eb47d8a7dae8197d177c6b09b49c16c8f81e92542cf01
Dec 13 03:47:04 crc kubenswrapper[4766]: W1213 03:47:04.861302 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe3fb891_3c52_49fc_9b5c_22c7dbde195b.slice/crio-9c4881da0b2ba7ba18f3b8e0edda362af301c82bb40f9c9f405ae7ae55244e40 WatchSource:0}: Error finding container 9c4881da0b2ba7ba18f3b8e0edda362af301c82bb40f9c9f405ae7ae55244e40: Status 404 returned error can't find the container with id 9c4881da0b2ba7ba18f3b8e0edda362af301c82bb40f9c9f405ae7ae55244e40
Dec 13 03:47:04 crc kubenswrapper[4766]: W1213 03:47:04.862451 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode283eb17_df64_4565_8f82_a8afac963d4c.slice/crio-b45357c837bd4b90d52df1299ea263310b9fb6aa66dd795d0f37b252d178e1e0 WatchSource:0}: Error finding container b45357c837bd4b90d52df1299ea263310b9fb6aa66dd795d0f37b252d178e1e0: Status 404 returned error can't find the container with id b45357c837bd4b90d52df1299ea263310b9fb6aa66dd795d0f37b252d178e1e0
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.938717 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.939053 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.439009417 +0000 UTC m=+156.948942381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:04 crc kubenswrapper[4766]: I1213 03:47:04.939412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:04 crc kubenswrapper[4766]: E1213 03:47:04.939940 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.439920594 +0000 UTC m=+156.949853558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.032015 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4" event={"ID":"c8fe9b78-26e6-41df-9826-905771389a60","Type":"ContainerStarted","Data":"0e6ff6e1cae52742ca57f40efcb7b107445aa4eede5b01ccc9e968710152dc82"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.035897 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" event={"ID":"4831deef-ffff-4e1d-9b1d-57903b35ae2b","Type":"ContainerStarted","Data":"3542dddbdb9e4d38f5407fae130bb5fbf4b724b3c1e7610eab93c1c4238ce938"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.039756 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" event={"ID":"c3ad1798-0983-4d00-ae6d-aef7143647f3","Type":"ContainerStarted","Data":"65fdcce6ded87a5ccd28215fe5071fbf6dbb475635730cc947ddc86e247cd749"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.040800 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:05 crc kubenswrapper[4766]: E1213 03:47:05.041165 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.541104905 +0000 UTC m=+157.051037879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.041285 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" event={"ID":"d9ebba92-4a0b-494f-995e-09ebdcd65a6e","Type":"ContainerStarted","Data":"ee3577f2907db51210f8754f4a74fce4e6f93a46e5b41fd6348725a59b3eb3c2"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.041396 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:05 crc kubenswrapper[4766]: E1213 03:47:05.041889 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.541866137 +0000 UTC m=+157.051799281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.043062 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" event={"ID":"1ac742ec-fd68-4c89-890f-b6117573eb88","Type":"ContainerStarted","Data":"1750cb7ec34301a0b1b71fc6395aa86679c6faded65c84f6335f55f3e5faa0e6"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.045817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tjszx" event={"ID":"41447db8-0fe6-4772-8bbb-12a68ba33f1e","Type":"ContainerStarted","Data":"319772b744c1f6f757f29417745f8e699c4183738fa94514e30bd1beac043b2c"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.045874 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tjszx" event={"ID":"41447db8-0fe6-4772-8bbb-12a68ba33f1e","Type":"ContainerStarted","Data":"8e516cedba5e7b453808ccc991acb99bb8f4058f3a155f693580eb01b45911c8"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.050337 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" event={"ID":"cf994913-919d-4268-b606-5b0286e721d0","Type":"ContainerStarted","Data":"89adf7f7c0be3a1a6ec71864d86057b9ef05feebf61cbce5328ab9a1bc111ff3"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.051345 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" event={"ID":"084d14a7-1f30-49b0-9e05-3db228f5087c","Type":"ContainerStarted","Data":"ebeab44a92cf585910b5ba5bb09a307b531c40649f1ce13c78447d95afc0ab72"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.052192 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" event={"ID":"c74ca261-ba68-4ce7-bf90-200deb0b2b11","Type":"ContainerStarted","Data":"b10a9a9804afb9fa3343e30f06347a8befe24b1404e2a5fa24c00aeee1077907"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.055753 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-f9jwz" event={"ID":"10191553-e60f-47dc-a4f9-b74d8bcc7a58","Type":"ContainerStarted","Data":"efa5e2a0a371f5e09ceedf4d426325cf8caaa4c759b13a227eb0e363b7fda4cf"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.056372 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" event={"ID":"deca563a-35ff-43bc-8251-f3b7c80581b2","Type":"ContainerStarted","Data":"758262fb79bdc05c9fee96e71d513c02110b1a6a924362b2877e316264a7c647"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.057871 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" event={"ID":"e283eb17-df64-4565-8f82-a8afac963d4c","Type":"ContainerStarted","Data":"b45357c837bd4b90d52df1299ea263310b9fb6aa66dd795d0f37b252d178e1e0"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.076209 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-62jdj" podStartSLOduration=131.076187145 podStartE2EDuration="2m11.076187145s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:05.075270898 +0000 UTC m=+156.585203862" watchObservedRunningTime="2025-12-13 03:47:05.076187145 +0000 UTC m=+156.586120109"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.096160 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" event={"ID":"70b87a0c-be0a-447b-b901-2b327774f436","Type":"ContainerStarted","Data":"0eefd48b2c4881a234b609528b1c0bfdafebb9934c1e3e7d32ac04d11f6333d9"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.109349 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-f9jwz" podStartSLOduration=9.109316837 podStartE2EDuration="9.109316837s" podCreationTimestamp="2025-12-13 03:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:05.106372711 +0000 UTC m=+156.616305705" watchObservedRunningTime="2025-12-13 03:47:05.109316837 +0000 UTC m=+156.619249801"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.118421 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-754n6" event={"ID":"d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba","Type":"ContainerStarted","Data":"7761b21b360f10d2067ac64dbbf445f0c13d23f183612e7764891a47bc2b6d4b"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.133917 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" event={"ID":"f91707aa-5388-47ac-8bed-37b6e6450118","Type":"ContainerStarted","Data":"b6666104fec83d6eef30c9c2cd55baa830d3b33d613782ddac909bdd6857c312"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.151853 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:05 crc kubenswrapper[4766]: E1213 03:47:05.153567 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.653539506 +0000 UTC m=+157.163472470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.157128 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-754n6" podStartSLOduration=132.157108631 podStartE2EDuration="2m12.157108631s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:05.156391519 +0000 UTC m=+156.666324493" watchObservedRunningTime="2025-12-13 03:47:05.157108631 +0000 UTC m=+156.667041595"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.203761 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" event={"ID":"a71b41ac-3909-4658-8b06-703e8cdba663","Type":"ContainerStarted","Data":"21c065b7432bbbcbe0fb85ffa5eda8ca44e0113c7804937dbd466c2f0395e626"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.221608 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" event={"ID":"6c019342-bb58-4587-b4fb-24f0641905b1","Type":"ContainerStarted","Data":"0fd5155aa62705fe9ed3de515bdf6822454a755f3458f92ea04e2c2e53b5ac85"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.224098 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8ckg8" event={"ID":"5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d","Type":"ContainerStarted","Data":"f7d773792ab735d9ccac0c562a49c411c95e35d67bb133938b726615ddbbc36e"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.225000 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" event={"ID":"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2","Type":"ContainerStarted","Data":"3cc495fda207c667fb4713f3a6b9c34c644b480a21c1a7b786844581d0d7b0c2"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.226819 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" event={"ID":"59e373cc-61e8-4c1a-9734-6a2120179e36","Type":"ContainerStarted","Data":"c6691347b992c54d70de1cd82d27df2aa9b40b80c5508da047c6788bd8664e65"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.227836 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" event={"ID":"a26b74bc-41ac-4d65-b833-cb269d199ddd","Type":"ContainerStarted","Data":"8889c39d0c6b8bc88cf243170fd590dc166fe5ea3466f653a63977c12b25178a"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.228795 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" event={"ID":"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe","Type":"ContainerStarted","Data":"61c76627e33b6c1b136eb47d8a7dae8197d177c6b09b49c16c8f81e92542cf01"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.231399 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" event={"ID":"e4b485d4-889f-4a16-8d5d-ebef14fdeb98","Type":"ContainerStarted","Data":"5c3d3bb5886582feef8f71a608ddb1857c365b8e71a845fc5236c8630b99e6d5"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.233238 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" event={"ID":"ca84cfba-243f-4342-9550-db6256f0c2f8","Type":"ContainerStarted","Data":"6d9d9b5b6fcd5d504a824d86764c3fbc496917356a2f08c5d07bae680752d900"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.235108 4766 generic.go:334] "Generic (PLEG): container finished" podID="b486f774-cede-460b-bc98-a89766288e88" containerID="61d543f29d0148a68dec43cac9056fb0d0dfe39d864702fcca3c782e9b3bc64f" exitCode=0
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.235155 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" event={"ID":"b486f774-cede-460b-bc98-a89766288e88","Type":"ContainerDied","Data":"61d543f29d0148a68dec43cac9056fb0d0dfe39d864702fcca3c782e9b3bc64f"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.244571 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8rm5b" podStartSLOduration=131.244550478 podStartE2EDuration="2m11.244550478s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:05.242469257 +0000 UTC m=+156.752402221" watchObservedRunningTime="2025-12-13 03:47:05.244550478 +0000 UTC m=+156.754483442"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.276209 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:05 crc kubenswrapper[4766]: E1213 03:47:05.278287 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.778265068 +0000 UTC m=+157.288198032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.296786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" event={"ID":"3c6809b6-c67a-45cf-b251-f20b62790313","Type":"ContainerStarted","Data":"0e8f9a87a9111faaf38fccb060fc08a3d716f62346f7aa3c5c393fa6ecc35ae0"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.299228 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.299327 4766 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d47ng container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/healthz\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.299358 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" podUID="3c6809b6-c67a-45cf-b251-f20b62790313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.16:8080/healthz\": dial tcp 10.217.0.16:8080: connect: connection refused"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.309636 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bjcvg" event={"ID":"59780c25-821e-416b-9437-6b78b8487221","Type":"ContainerStarted","Data":"32b2a676d176271fd52091c4d511a530f64332139f52a78f10a58b61754c75c5"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.312301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" event={"ID":"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81","Type":"ContainerStarted","Data":"6061b302d232e926d85dee7bed72a7601e46f7305fd4d52459d23d646c2f4e7c"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.315836 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x546d" event={"ID":"fc132ae0-d7bd-4064-89ad-4f9a57e76369","Type":"ContainerStarted","Data":"292e07dbf5233c904b1ba9972e9a25cca4ae4df9227f4c79e46317ced02d8186"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.319543 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" podStartSLOduration=131.319530209 podStartE2EDuration="2m11.319530209s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:05.318988853 +0000 UTC m=+156.828921817" watchObservedRunningTime="2025-12-13 03:47:05.319530209 +0000 UTC m=+156.829463173"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.321209 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" event={"ID":"0aa74d23-cdf2-4758-82ff-aaf0017693c3","Type":"ContainerStarted","Data":"e3a2de7ead6a9023edcf68d63f829f901712a1febedf428a0e2b0bafe58a4394"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.333312 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:05 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:05 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:05 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.333391 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.340197 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" event={"ID":"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2","Type":"ContainerStarted","Data":"976bafcc66e492f0846788dc0bae6c81fb52dd387c817f70a84145a9b976887f"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.342992 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9" event={"ID":"867694b5-7481-462e-b143-379b53b0ad6e","Type":"ContainerStarted","Data":"7bd94757700fdf0450749e0e85ea074b1064940aed889b29deabc26e1bc2b965"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.344719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" event={"ID":"fe3fb891-3c52-49fc-9b5c-22c7dbde195b","Type":"ContainerStarted","Data":"9c4881da0b2ba7ba18f3b8e0edda362af301c82bb40f9c9f405ae7ae55244e40"}
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.351231 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x546d" podStartSLOduration=132.351209639 podStartE2EDuration="2m12.351209639s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:05.343093231 +0000 UTC m=+156.853026215" watchObservedRunningTime="2025-12-13 03:47:05.351209639 +0000 UTC m=+156.861142603"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.373479 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wrg86" podStartSLOduration=131.373457603 podStartE2EDuration="2m11.373457603s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:05.370601019 +0000 UTC m=+156.880533983" watchObservedRunningTime="2025-12-13 03:47:05.373457603 +0000 UTC m=+156.883390567"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.379255 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:05 crc kubenswrapper[4766]: E1213 03:47:05.379890 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.87985558 +0000 UTC m=+157.389788544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.482944 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:05 crc kubenswrapper[4766]: E1213 03:47:05.484206 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:05.984186684 +0000 UTC m=+157.494119648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.584031 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:05 crc kubenswrapper[4766]: E1213 03:47:05.584236 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:06.08420062 +0000 UTC m=+157.594133584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.584754 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:05 crc kubenswrapper[4766]: E1213 03:47:05.585309 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:06.085294962 +0000 UTC m=+157.595228126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.741611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.741818 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.741848 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.741905 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.741936 4766 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:47:05 crc kubenswrapper[4766]: E1213 03:47:05.742103 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:06.242076615 +0000 UTC m=+157.752009579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.743334 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.757062 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.759519 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:47:05 crc kubenswrapper[4766]: I1213 03:47:05.761949 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:05.903848 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:05.904913 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:05.905346 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:06.405328848 +0000 UTC m=+157.915261812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.014844 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.017670 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.018168 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:06.018674 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:06.518653245 +0000 UTC m=+158.028586209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.120743 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:06.121499 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:06.621459494 +0000 UTC m=+158.131392458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.222459 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:06.222890 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:06.722869251 +0000 UTC m=+158.232802205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.324855 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:06.325342 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:06.825319878 +0000 UTC m=+158.335252842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.334735 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:06 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Dec 13 03:47:06 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:06 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.334866 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.427715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:06.428187 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:06.928151567 +0000 UTC m=+158.438084531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.431171 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" event={"ID":"59e373cc-61e8-4c1a-9734-6a2120179e36","Type":"ContainerStarted","Data":"b1b56e0051aa83f23b35c8716adead635b1acc5e51b5e3d68a29f72c2d6789e5"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.431865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.436975 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nzfkv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.437030 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.441313 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" event={"ID":"0aa74d23-cdf2-4758-82ff-aaf0017693c3","Type":"ContainerStarted","Data":"04b86750850610e389113934b9f31bd5c1ce2b3b97438de3cb6954a0df392b48"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.473332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" event={"ID":"13f10968-6869-4ee6-975a-29260f3914ba","Type":"ContainerStarted","Data":"4448089ec2d7eb78c0f6812aa225055377edfe0577bf57cdb91b4df5c6c1f721"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.486756 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" podStartSLOduration=132.486728697 podStartE2EDuration="2m12.486728697s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:06.485396338 +0000 UTC m=+157.995329302" watchObservedRunningTime="2025-12-13 03:47:06.486728697 +0000 UTC m=+157.996661651" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.526592 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" event={"ID":"a26b74bc-41ac-4d65-b833-cb269d199ddd","Type":"ContainerStarted","Data":"c9fa0a8e90b61fa4b95214c45e56ba393c66a8d86ad42a513322d13e562b3e5b"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.528295 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.529216 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.537688 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kxg69 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.537786 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:06.538849 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:07.038833346 +0000 UTC m=+158.548766310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.554739 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" event={"ID":"ce7d03d7-4dcc-4d25-909d-a0db72482053","Type":"ContainerStarted","Data":"5d47085b772bee9580d869c472825eecfac54adcf39bcbc508332b9f8475d5be"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.570288 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" event={"ID":"e4b485d4-889f-4a16-8d5d-ebef14fdeb98","Type":"ContainerStarted","Data":"7d43deac86a4b870455f7db00f94017d8c0b01a22ad7fbe4ca4a26b4f4d96aaa"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.595103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" event={"ID":"d9ebba92-4a0b-494f-995e-09ebdcd65a6e","Type":"ContainerStarted","Data":"9976506ae524ac8eeeec31a67a9b6b3952a779eeb95ddfc6a606e3f63163955b"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.595412 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sdjlm" podStartSLOduration=132.595387237 podStartE2EDuration="2m12.595387237s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:06.538365413 +0000 UTC m=+158.048298397" watchObservedRunningTime="2025-12-13 03:47:06.595387237 +0000 UTC m=+158.105320201" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.596198 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.596224 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm" podStartSLOduration=132.596218651 podStartE2EDuration="2m12.596218651s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:06.582132568 +0000 UTC m=+158.092065552" watchObservedRunningTime="2025-12-13 03:47:06.596218651 +0000 UTC m=+158.106151615" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.606479 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvqbr" podStartSLOduration=133.606453092 podStartE2EDuration="2m13.606453092s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:06.602372082 +0000 UTC m=+158.112305076" watchObservedRunningTime="2025-12-13 03:47:06.606453092 +0000 UTC m=+158.116386076" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.608695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" event={"ID":"a71b41ac-3909-4658-8b06-703e8cdba663","Type":"ContainerStarted","Data":"41b1c170e566508abfba18f833812054209b86b45f9e8a834c7e803d1e3d39f1"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.609728 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rgvbd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:5443/healthz\": dial tcp 10.217.0.15:5443: connect: connection refused" start-of-body= Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.609812 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" podUID="d9ebba92-4a0b-494f-995e-09ebdcd65a6e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.15:5443/healthz\": dial tcp 10.217.0.15:5443: connect: connection refused" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.609994 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.614013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" event={"ID":"4831deef-ffff-4e1d-9b1d-57903b35ae2b","Type":"ContainerStarted","Data":"aba79ade13093e73b5e731b320311aa92969b3fd9d12dc210631134fcdd4ba1e"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.617958 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" 
event={"ID":"ca84cfba-243f-4342-9550-db6256f0c2f8","Type":"ContainerStarted","Data":"0f617831a0232dbca51685b497f5d30202b111a2a77697923a8634f4c2532a11"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.619129 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.620328 4766 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-drbvl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.620374 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" podUID="ca84cfba-243f-4342-9550-db6256f0c2f8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.621183 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" event={"ID":"1ac742ec-fd68-4c89-890f-b6117573eb88","Type":"ContainerStarted","Data":"b0b214d8bb3eb2105671edbc74e6e4490005bcf5d9b33a7cf3d96e87935fcdbc"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.622333 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" event={"ID":"f91707aa-5388-47ac-8bed-37b6e6450118","Type":"ContainerStarted","Data":"ae0ac62671615a4102f6a72070ad68610b0b1698bad7e6ba75f0b7ae8a00bb04"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.623256 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" event={"ID":"c74ca261-ba68-4ce7-bf90-200deb0b2b11","Type":"ContainerStarted","Data":"835bb345a712cfd38a8ac43bd6d3805584567d3524068565920aea8cda33c70e"} Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.634804 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:06.635703 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:07.13568135 +0000 UTC m=+158.645614314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.642646 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" podStartSLOduration=132.642623594 podStartE2EDuration="2m12.642623594s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:06.639268735 +0000 UTC m=+158.149201699" watchObservedRunningTime="2025-12-13 03:47:06.642623594 +0000 UTC m=+158.152556568" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.643620 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tjszx" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.648885 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.648931 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.705773 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" podStartSLOduration=132.705747157 podStartE2EDuration="2m12.705747157s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:06.666981659 +0000 UTC m=+158.176914643" watchObservedRunningTime="2025-12-13 03:47:06.705747157 +0000 UTC m=+158.215680121" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.736885 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.737626 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.758635 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-dh99j" podStartSLOduration=132.758608369 podStartE2EDuration="2m12.758608369s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:06.757880888 +0000 UTC m=+158.267813872" watchObservedRunningTime="2025-12-13 03:47:06.758608369 +0000 UTC m=+158.268541333" Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:06.762209 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:07.262169883 +0000 UTC m=+158.772102847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.773460 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl" podStartSLOduration=132.773405863 podStartE2EDuration="2m12.773405863s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:06.708671773 +0000 UTC m=+158.218604737" watchObservedRunningTime="2025-12-13 03:47:06.773405863 +0000 UTC m=+158.283338827" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.798392 4766 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-k7zxp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.798547 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" podUID="a71b41ac-3909-4658-8b06-703e8cdba663" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.845306 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:06.846324 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:07.346297493 +0000 UTC m=+158.856230457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.946412 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwjph" podStartSLOduration=132.946382222 podStartE2EDuration="2m12.946382222s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:06.864713204 +0000 UTC m=+158.374646168" watchObservedRunningTime="2025-12-13 03:47:06.946382222 +0000 UTC m=+158.456315186" Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.948532 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:06 crc kubenswrapper[4766]: E1213 03:47:06.949013 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:07.448992058 +0000 UTC m=+158.958925022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
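
The pod_startup_latency_tracker entries above are plain arithmetic: podStartSLOduration (and podStartE2EDuration, its string form) is watchObservedRunningTime minus podCreationTimestamp. Both pull timestamps sit at Go's zero time (0001-01-01 00:00:00 +0000 UTC), i.e. no image pull was recorded for these pods, which is presumably why the SLO and end-to-end figures coincide. A quick check against the kube-storage-version-migrator-operator entry above; this is illustrative arithmetic only, not kubelet code:

from datetime import datetime, timezone

# watchObservedRunningTime minus podCreationTimestamp, per the
# kube-storage-version-migrator-operator entry in the log above.
created = datetime(2025, 12, 13, 3, 44, 54, tzinfo=timezone.utc)
observed = datetime.strptime(
    "2025-12-13 03:47:06.946382222"[:26],  # truncate ns to us; %f takes microseconds
    "%Y-%m-%d %H:%M:%S.%f",
).replace(tzinfo=timezone.utc)
print((observed - created).total_seconds())  # 132.946382 ~ podStartSLOduration=132.946382222

The small residue against the logged value is just the nanosecond precision dropped by the truncation.
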
Dec 13 03:47:06 crc kubenswrapper[4766]: I1213 03:47:06.987302 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp" podStartSLOduration=132.987279923 podStartE2EDuration="2m12.987279923s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:06.952570763 +0000 UTC m=+158.462503737" watchObservedRunningTime="2025-12-13 03:47:06.987279923 +0000 UTC m=+158.497212887" Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.056386 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:07 crc kubenswrapper[4766]: E1213 03:47:07.056958 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:07.556935238 +0000 UTC m=+159.066868192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
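
Every MountVolume.MountDevice and UnmountVolume.TearDown failure in this stretch has the same root cause: the kubevirt.io.hostpath-provisioner CSI plugin has not yet registered with the kubelet, so newCsiDriverClient fails before any mount or unmount is even attempted, and nestedpendingoperations requeues the operation with a fixed durationBeforeRetry of 500ms. Nothing is wrong with the volume itself; the loop clears once the driver registers (CSI node plugins normally register through a socket under /var/lib/kubelet/plugins_registry/). A throwaway sketch that tallies these retries, assuming this excerpt is saved verbatim as kubelet.log (a hypothetical filename):

import re
from collections import Counter

# Count CSI retry errors per (operation, volume) in the saved excerpt.
PATTERN = re.compile(
    r'Error: (?P<op>MountVolume\.MountDevice|UnmountVolume\.TearDown) '
    r'failed for volume "(?P<vol>[^"]+)"'
)

counts = Counter()
with open("kubelet.log", encoding="utf-8") as f:
    for line in f:
        # Several journal entries can share one physical line in this dump.
        for m in PATTERN.finditer(line):
            counts[(m.group("op"), m.group("vol"))] += 1

for (op, vol), n in counts.most_common():
    print(f"{n:3d}x {op:26} {vol}")

Run over this window, it shows the MountDevice retries (for the pending image-registry pod) and the TearDown retries (for the departed pod 8f668bae-612b-4b75-9490-919e737c6a3b) accumulating in lock-step, one pair roughly every 500ms.
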
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.073392 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tjszx" podStartSLOduration=134.07336547 podStartE2EDuration="2m14.07336547s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:07.072826194 +0000 UTC m=+158.582759158" watchObservedRunningTime="2025-12-13 03:47:07.07336547 +0000 UTC m=+158.583298454" Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.190048 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:07 crc kubenswrapper[4766]: E1213 03:47:07.190661 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:07.690641513 +0000 UTC m=+159.200574477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.341578 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:07 crc kubenswrapper[4766]: E1213 03:47:07.341759 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:07.841734389 +0000 UTC m=+159.351667353 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.342161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:07 crc kubenswrapper[4766]: E1213 03:47:07.342686 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:07.842678237 +0000 UTC m=+159.352611201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.436576 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:07 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Dec 13 03:47:07 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:07 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.436659 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.443503 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:07 crc kubenswrapper[4766]: E1213 03:47:07.444236 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:07.944205658 +0000 UTC m=+159.454138622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
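
Two probe-failure shapes recur in this window and are worth telling apart. The readiness failures ("connect: connection refused") just mean a freshly started container is not listening yet and resolve on their own; the router's startup probe instead gets an HTTP 500 whose body is a per-check report in the k8s.io healthz style, where [+] marks a passing check and [-] a failing one (with the reason withheld from unauthorized callers). A small parser for that body shape, with the sample string lifted from the router probe output above; a sketch only, not the prober's actual parsing:

# Turn a healthz-style body into {check-name: passed?}.
def parse_healthz(body: str) -> dict:
    results = {}
    for line in body.splitlines():
        line = line.strip()
        if line.startswith("[+]"):
            results[line[3:].split()[0]] = True
        elif line.startswith("[-]"):
            results[line[3:].split()[0]] = False
    return results

body = """[-]backend-http failed: reason withheld
[-]has-synced failed: reason withheld
[+]process-running ok
healthz check failed"""

print(parse_healthz(body))
# {'backend-http': False, 'has-synced': False, 'process-running': True}

Here the router process is up but its backend-http and has-synced checks are still failing, which is precisely what the startup probe is guarding against marking the pod started too early.
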
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.554751 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:07 crc kubenswrapper[4766]: E1213 03:47:07.555372 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:08.055349171 +0000 UTC m=+159.565282135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.658315 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:07 crc kubenswrapper[4766]: E1213 03:47:07.658886 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:08.158857489 +0000 UTC m=+159.668790453 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.684340 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8ckg8" event={"ID":"5e5d8bff-9cca-40f0-a1b3-1ff04b1a972d","Type":"ContainerStarted","Data":"3fcae0d407e1247c0b4f502de3c27d9433aaf293a9ce36ec9310d245eb9e9b0e"}
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.693901 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" event={"ID":"70b87a0c-be0a-447b-b901-2b327774f436","Type":"ContainerStarted","Data":"d787c6ce147c2798fd34842e36f3b14adee77046884a77255beb16876cc23dbe"}
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.706795 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" event={"ID":"084d14a7-1f30-49b0-9e05-3db228f5087c","Type":"ContainerStarted","Data":"ed7d4ade1ae2d7ebb263b65d5bbe7c77db50ea131ad2de1818982b78b4c5eafb"}
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.740868 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8ckg8" podStartSLOduration=11.740847367 podStartE2EDuration="11.740847367s" podCreationTimestamp="2025-12-13 03:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:07.738027014 +0000 UTC m=+159.247959988" watchObservedRunningTime="2025-12-13 03:47:07.740847367 +0000 UTC m=+159.250780331"
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.755146 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" event={"ID":"e4b485d4-889f-4a16-8d5d-ebef14fdeb98","Type":"ContainerStarted","Data":"422b1c2454eab9c3eb79083d2af3731068de9f6fe8cd1cbacd187d8c3bec822f"}
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.770192 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:07 crc kubenswrapper[4766]: E1213 03:47:07.770696 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:08.270674852 +0000 UTC m=+159.780607816 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.784986 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bjcvg" event={"ID":"59780c25-821e-416b-9437-6b78b8487221","Type":"ContainerStarted","Data":"3d15da62ce74493dcf6def76fad7c9af9acf2a3c912e9617c81a86b752909cb3"}
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.809271 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j864x" podStartSLOduration=133.809252115 podStartE2EDuration="2m13.809252115s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:07.808903125 +0000 UTC m=+159.318836099" watchObservedRunningTime="2025-12-13 03:47:07.809252115 +0000 UTC m=+159.319185079"
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.823452 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4" event={"ID":"c8fe9b78-26e6-41df-9826-905771389a60","Type":"ContainerStarted","Data":"71837878ef63d917f92ae0ade33739e35e809e14f1bb59249553a2a9f5cb2bef"}
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.872909 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-754n6"
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.872970 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9" event={"ID":"867694b5-7481-462e-b143-379b53b0ad6e","Type":"ContainerStarted","Data":"8d5a1ed6ff1fc00a8b796d53bb0f6855045cee429aae9ffb0b2fe5989bbdc72c"}
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.873133 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-754n6"
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.873289 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:07 crc kubenswrapper[4766]: E1213 03:47:07.875015 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:08.374985855 +0000 UTC m=+159.884918959 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.882864 4766 patch_prober.go:28] interesting pod/apiserver-76f77b778f-754n6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.882940 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-754n6" podUID="d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused"
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.920298 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" event={"ID":"8763ae63-aa9e-4f5f-8e81-015aaa58e3a2","Type":"ContainerStarted","Data":"4a28a68f79e97cab1356daf9f620f056ff5a4884435add2a475f9a85f311edbd"}
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.972156 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9" podStartSLOduration=133.972136657 podStartE2EDuration="2m13.972136657s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:07.970284133 +0000 UTC m=+159.480217107" watchObservedRunningTime="2025-12-13 03:47:07.972136657 +0000 UTC m=+159.482069621"
Dec 13 03:47:07 crc kubenswrapper[4766]: I1213 03:47:07.975128 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:07 crc kubenswrapper[4766]: E1213 03:47:07.975539 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:08.475523067 +0000 UTC m=+159.985456031 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.017039 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" event={"ID":"fe3fb891-3c52-49fc-9b5c-22c7dbde195b","Type":"ContainerStarted","Data":"6d9ff5cf176c982341612ec0a331afdac6c410eaf5303e0430195e4f0cdb9635"}
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.017801 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zchph" podStartSLOduration=134.017783697 podStartE2EDuration="2m14.017783697s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:08.0171891 +0000 UTC m=+159.527122074" watchObservedRunningTime="2025-12-13 03:47:08.017783697 +0000 UTC m=+159.527716661"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.059859 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" event={"ID":"c3ad1798-0983-4d00-ae6d-aef7143647f3","Type":"ContainerStarted","Data":"f68d1cae163e5b9825755dbe0d762c97e7f38e0e1abdf634b963212c5da8918a"}
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.062315 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nzfkv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.062367 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.063315 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.063363 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.076387 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:08 crc kubenswrapper[4766]: E1213 03:47:08.078278 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:08.578252493 +0000 UTC m=+160.088185467 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.101009 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tzfzc" podStartSLOduration=135.10098973 podStartE2EDuration="2m15.10098973s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:08.099224728 +0000 UTC m=+159.609157702" watchObservedRunningTime="2025-12-13 03:47:08.10098973 +0000 UTC m=+159.610922694"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.134782 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k7zxp"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.141862 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-drbvl"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.194263 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.236312 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj"
Dec 13 03:47:08 crc kubenswrapper[4766]: E1213 03:47:08.240975 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:08.740933439 +0000 UTC m=+160.250866473 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.259397 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l9g79" podStartSLOduration=134.25936541 podStartE2EDuration="2m14.25936541s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:08.239003312 +0000 UTC m=+159.748936276" watchObservedRunningTime="2025-12-13 03:47:08.25936541 +0000 UTC m=+159.769298374"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.318239 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:08 crc kubenswrapper[4766]: E1213 03:47:08.320022 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:08.81999358 +0000 UTC m=+160.329926694 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.352356 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:08 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:08 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:08 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.352832 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.541840 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:08 crc kubenswrapper[4766]: E1213 03:47:08.542350 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:09.042330838 +0000 UTC m=+160.552263802 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.653324 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:08 crc kubenswrapper[4766]: E1213 03:47:08.654360 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:09.154334996 +0000 UTC m=+160.664267960 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.659136 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.680142 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x546d"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.680201 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x546d"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.693955 4766 patch_prober.go:28] interesting pod/console-f9d7485db-x546d container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.694038 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x546d" podUID="fc132ae0-d7bd-4064-89ad-4f9a57e76369" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.759599 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:08 crc kubenswrapper[4766]: E1213 03:47:08.761038 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:09.261014678 +0000 UTC m=+160.770947642 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.831323 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.851312 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.851372 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm"
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.861816 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:08 crc kubenswrapper[4766]: E1213 03:47:08.861966 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:09.361936801 +0000 UTC m=+160.871869755 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.862230 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:08 crc kubenswrapper[4766]: E1213 03:47:08.862696 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:09.362686013 +0000 UTC m=+160.872618977 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:08 crc kubenswrapper[4766]: I1213 03:47:08.992087 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:08 crc kubenswrapper[4766]: E1213 03:47:08.992576 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:09.492551546 +0000 UTC m=+161.002484510 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.027117 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.069073 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rgvbd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.069122 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" podUID="d9ebba92-4a0b-494f-995e-09ebdcd65a6e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.15:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.074945 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.075016 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.075105 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.075123 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.103254 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:09 crc kubenswrapper[4766]: E1213 03:47:09.104496 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:09.604478372 +0000 UTC m=+161.114411336 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.141417 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" event={"ID":"13bd5efe-b6fb-49c0-9e16-0ec0ab4b3b81","Type":"ContainerStarted","Data":"1fb4a7ab3e8e6f52587b0cb2770008304d1875f9955af216fa1e2c56d4be9ae0"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.187535 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" event={"ID":"deca563a-35ff-43bc-8251-f3b7c80581b2","Type":"ContainerStarted","Data":"4bf9ec25034cd9d2ce04c34159f7488c07e57ad95f11fe36faeb4c1b24acfcd4"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.213884 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:09 crc kubenswrapper[4766]: E1213 03:47:09.216015 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:09.715989636 +0000 UTC m=+161.225922610 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.266143 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" event={"ID":"e283eb17-df64-4565-8f82-a8afac963d4c","Type":"ContainerStarted","Data":"2b2231b927c555af9acbf62a67c41c603dc75cd79cd54cceab6bb34584de7db8"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.318551 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:09 crc kubenswrapper[4766]: E1213 03:47:09.319118 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:09.819095693 +0000 UTC m=+161.329028657 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.320968 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p7x6l"]
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.322317 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.328718 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kxzv9" event={"ID":"867694b5-7481-462e-b143-379b53b0ad6e","Type":"ContainerStarted","Data":"0b914eb97c1e9cc766168ea9d31fc6a97f984cdad718a75636c9830d62ca13d5"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.329489 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zxh2f"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.337969 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.343818 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:09 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:09 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:09 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.343911 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.349810 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" event={"ID":"17fbe4f6-c1a3-4e9f-b01c-0645f26b8ebe","Type":"ContainerStarted","Data":"cd388eef8ad083b0d05b0238fa70fc94610d4596910b9e4bc5037aa4104b7762"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.368086 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f1089992cdd3d2ffdd33ccd578fbf40f1c3bb099a34295b32cc368654cfb105a"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.368716 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.387815 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"17ea6f0a64f174a273282da946ba5b72a2047fb77987c239af68cdb00d91ac05"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.402987 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7x6l"]
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.413731 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" event={"ID":"8f9d412f-3812-4b13-8ba6-e8b79a5cb6f2","Type":"ContainerStarted","Data":"e75c4012599f6b281fc60a2650c132655e368af8ff641d9bfb481776f9c5f236"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.415455 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" event={"ID":"1ac742ec-fd68-4c89-890f-b6117573eb88","Type":"ContainerStarted","Data":"9f08eea80b0a20f14b587336a3ac6795d7f10fece3c2d35b182b5199480d2f69"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.415876 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.428249 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" event={"ID":"b486f774-cede-460b-bc98-a89766288e88","Type":"ContainerStarted","Data":"cc941d5d54139ce82f40a6417f778bceab020faac616c21e69648b566a778278"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.428855 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.447654 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.447954 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-utilities\") pod \"certified-operators-p7x6l\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") " pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.447978 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zksbl\" (UniqueName: \"kubernetes.io/projected/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-kube-api-access-zksbl\") pod \"certified-operators-p7x6l\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") " pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.448166 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-catalog-content\") pod \"certified-operators-p7x6l\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") " pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: E1213 03:47:09.449391 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:09.949368518 +0000 UTC m=+161.459301482 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.464607 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" event={"ID":"f91707aa-5388-47ac-8bed-37b6e6450118","Type":"ContainerStarted","Data":"038f25bdb5274e6b0d6414ba7e783e0da6344c5edbd9d27dbc6d6dc704d0ca95"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.478508 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-mrdmc" podStartSLOduration=135.478430051 podStartE2EDuration="2m15.478430051s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:09.476515345 +0000 UTC m=+160.986448329" watchObservedRunningTime="2025-12-13 03:47:09.478430051 +0000 UTC m=+160.988363015"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.492633 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6dqhc"]
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.510317 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"906b85f0204ded92f51e99607277e5e563a7c59791e71a5f33395391415f8248"}
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.511380 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.526059 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dqhc"]
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.545749 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2vrm"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.550657 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zksbl\" (UniqueName: \"kubernetes.io/projected/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-kube-api-access-zksbl\") pod \"certified-operators-p7x6l\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") " pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.550746 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-utilities\") pod \"certified-operators-p7x6l\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") " pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.550805 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.550833 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-catalog-content\") pod \"certified-operators-p7x6l\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") " pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.553866 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-catalog-content\") pod \"certified-operators-p7x6l\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") " pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.554376 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-utilities\") pod \"certified-operators-p7x6l\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") " pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: E1213 03:47:09.555475 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.055417432 +0000 UTC m=+161.565350396 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.602094 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rgvbd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.15:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.602376 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" podUID="d9ebba92-4a0b-494f-995e-09ebdcd65a6e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.15:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.608978 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.620455 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" podStartSLOduration=135.62040655 podStartE2EDuration="2m15.62040655s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:09.608632124 +0000 UTC m=+161.118565088" watchObservedRunningTime="2025-12-13 03:47:09.62040655 +0000 UTC m=+161.130339504"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.653835 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.656407 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68z9n\" (UniqueName: \"kubernetes.io/projected/f162311d-72df-42b4-b586-7bc1d4945c99-kube-api-access-68z9n\") pod \"community-operators-6dqhc\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") " pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.657983 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-utilities\") pod \"community-operators-6dqhc\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") " pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.659463 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-catalog-content\") pod \"community-operators-6dqhc\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") " pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: E1213 03:47:09.666836 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.166800562 +0000 UTC m=+161.676733696 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.679471 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hzmqb"]
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.681253 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzmqb"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.693084 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zksbl\" (UniqueName: \"kubernetes.io/projected/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-kube-api-access-zksbl\") pod \"certified-operators-p7x6l\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") " pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.717302 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzmqb"]
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.757417 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.757500 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.758008 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vpns8" podStartSLOduration=135.757981749 podStartE2EDuration="2m15.757981749s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:09.756457094 +0000 UTC m=+161.266390068" watchObservedRunningTime="2025-12-13 03:47:09.757981749 +0000 UTC m=+161.267914713"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.760914 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-catalog-content\") pod \"community-operators-6dqhc\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") " pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.761004 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-utilities\") pod \"certified-operators-hzmqb\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " pod="openshift-marketplace/certified-operators-hzmqb"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.761046 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kstrr\" (UniqueName: \"kubernetes.io/projected/35afc352-db00-48b9-b888-6b8b9bc36403-kube-api-access-kstrr\") pod \"certified-operators-hzmqb\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " pod="openshift-marketplace/certified-operators-hzmqb"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.761098 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-catalog-content\") pod \"certified-operators-hzmqb\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " pod="openshift-marketplace/certified-operators-hzmqb"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.761137 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.761182 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68z9n\" (UniqueName: \"kubernetes.io/projected/f162311d-72df-42b4-b586-7bc1d4945c99-kube-api-access-68z9n\") pod \"community-operators-6dqhc\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") " pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.761271 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-utilities\") pod \"community-operators-6dqhc\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") " pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.762024 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-utilities\") pod \"community-operators-6dqhc\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") " pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.762342 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-catalog-content\") pod \"community-operators-6dqhc\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") " pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: E1213 03:47:09.762826 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.26280452 +0000 UTC m=+161.772737484 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.818915 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pfb6w" podStartSLOduration=135.818894077 podStartE2EDuration="2m15.818894077s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:09.81593131 +0000 UTC m=+161.325864264" watchObservedRunningTime="2025-12-13 03:47:09.818894077 +0000 UTC m=+161.328827041"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.847694 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68z9n\" (UniqueName: \"kubernetes.io/projected/f162311d-72df-42b4-b586-7bc1d4945c99-kube-api-access-68z9n\") pod \"community-operators-6dqhc\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") " pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.868642 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.869012 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-utilities\") pod \"certified-operators-hzmqb\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " pod="openshift-marketplace/certified-operators-hzmqb"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.869090 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kstrr\" (UniqueName: \"kubernetes.io/projected/35afc352-db00-48b9-b888-6b8b9bc36403-kube-api-access-kstrr\") pod \"certified-operators-hzmqb\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " pod="openshift-marketplace/certified-operators-hzmqb"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.869132 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-catalog-content\") pod \"certified-operators-hzmqb\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " pod="openshift-marketplace/certified-operators-hzmqb"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.869768 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-catalog-content\") pod \"certified-operators-hzmqb\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " pod="openshift-marketplace/certified-operators-hzmqb"
Dec 13 03:47:09 crc kubenswrapper[4766]: E1213 03:47:09.869866 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.369844973 +0000 UTC m=+161.879777937 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.870304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-utilities\") pod \"certified-operators-hzmqb\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " pod="openshift-marketplace/certified-operators-hzmqb"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.875501 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.886369 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" podStartSLOduration=136.886332527 podStartE2EDuration="2m16.886332527s" podCreationTimestamp="2025-12-13 03:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:09.880222968 +0000 UTC m=+161.390155962" watchObservedRunningTime="2025-12-13 03:47:09.886332527 +0000 UTC m=+161.396265491"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.887517 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wb4lm"]
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.889116 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb4lm"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.908804 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb4lm"]
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.920644 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kstrr\" (UniqueName: \"kubernetes.io/projected/35afc352-db00-48b9-b888-6b8b9bc36403-kube-api-access-kstrr\") pod \"certified-operators-hzmqb\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " pod="openshift-marketplace/certified-operators-hzmqb"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.990163 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.990535 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-catalog-content\") pod \"community-operators-wb4lm\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " pod="openshift-marketplace/community-operators-wb4lm"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.990573 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-utilities\") pod \"community-operators-wb4lm\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " pod="openshift-marketplace/community-operators-wb4lm"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.990815 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:09 crc kubenswrapper[4766]: I1213 03:47:09.990896 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvff\" (UniqueName: \"kubernetes.io/projected/16b3386a-f52e-47b8-a0cb-172dd34f4761-kube-api-access-qgvff\") pod \"community-operators-wb4lm\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " pod="openshift-marketplace/community-operators-wb4lm"
Dec 13 03:47:09 crc kubenswrapper[4766]: E1213 03:47:09.991332 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.491316838 +0000 UTC m=+162.001249802 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.086566 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rgvbd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.086671 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" podUID="d9ebba92-4a0b-494f-995e-09ebdcd65a6e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.15:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.095664 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.096364 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvff\" (UniqueName: \"kubernetes.io/projected/16b3386a-f52e-47b8-a0cb-172dd34f4761-kube-api-access-qgvff\") pod \"community-operators-wb4lm\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.096460 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-catalog-content\") pod \"community-operators-wb4lm\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.097620 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-catalog-content\") pod \"community-operators-wb4lm\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.097688 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.596923899 +0000 UTC m=+162.106856863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.097745 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-utilities\") pod \"community-operators-wb4lm\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.098079 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-utilities\") pod \"community-operators-wb4lm\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.130094 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzmqb" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.143706 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2qfj" podStartSLOduration=136.143687912 podStartE2EDuration="2m16.143687912s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:10.087938575 +0000 UTC m=+161.597871559" watchObservedRunningTime="2025-12-13 03:47:10.143687912 +0000 UTC m=+161.653620876" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.144718 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvff\" (UniqueName: \"kubernetes.io/projected/16b3386a-f52e-47b8-a0cb-172dd34f4761-kube-api-access-qgvff\") pod \"community-operators-wb4lm\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.146131 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4" podStartSLOduration=136.146122693 podStartE2EDuration="2m16.146122693s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:10.142872158 +0000 UTC m=+161.652805132" watchObservedRunningTime="2025-12-13 03:47:10.146122693 +0000 UTC m=+161.656055657" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.176357 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" podStartSLOduration=130.17633465 podStartE2EDuration="2m10.17633465s" podCreationTimestamp="2025-12-13 03:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:10.176034972 +0000 UTC m=+161.685967946" watchObservedRunningTime="2025-12-13 
03:47:10.17633465 +0000 UTC m=+161.686267614" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.203156 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.203728 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.703708414 +0000 UTC m=+162.213641378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.222479 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" podStartSLOduration=136.222454864 podStartE2EDuration="2m16.222454864s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:10.220098525 +0000 UTC m=+161.730031519" watchObservedRunningTime="2025-12-13 03:47:10.222454864 +0000 UTC m=+161.732387828" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.311854 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.312036 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.812002713 +0000 UTC m=+162.321935687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.312224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.313140 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.813120536 +0000 UTC m=+162.323053500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.342289 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:10 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Dec 13 03:47:10 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:10 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.342388 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.416006 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.417135 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.417252 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.917227263 +0000 UTC m=+162.427160237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.417567 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.418057 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:10.918049107 +0000 UTC m=+162.427982061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.519835 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.520208 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:11.020175195 +0000 UTC m=+162.530108149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.520351 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.520999 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:11.020980529 +0000 UTC m=+162.530913493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.586642 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6x9j4" event={"ID":"c8fe9b78-26e6-41df-9826-905771389a60","Type":"ContainerStarted","Data":"3a486026715f639bbcb14465798df6b5d3cd6a7e0972eac199bba5407d0b3ff4"} Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.621372 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.621892 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:11.121871501 +0000 UTC m=+162.631804465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.628836 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" event={"ID":"70b87a0c-be0a-447b-b901-2b327774f436","Type":"ContainerStarted","Data":"d8fa3a55a9140240c90ae2d41975b684237fb61cc159dedcc1f1e9f89d311ceb"} Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.664475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz47m" event={"ID":"084d14a7-1f30-49b0-9e05-3db228f5087c","Type":"ContainerStarted","Data":"5d9de3b164980e35dba3bec0dae42d076dbeebccac9f6276327127705210ed6d"} Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.670006 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ac463b757f9bbf9cf9461619a6933d773f3cf656b2bb863511baec228654e99b"} Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.671954 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cca69562180bd0642551906e417ca527a57920c339efe8d5504d8d72da3f4f34"} Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.677492 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"634be1fa865bfda366e3278fc238fb099fa207d5fb5fe28ae3ee8efb3d7cbdec"} Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.685358 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b7sz" podStartSLOduration=136.685328464 podStartE2EDuration="2m16.685328464s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:10.680460511 +0000 UTC m=+162.190393475" watchObservedRunningTime="2025-12-13 03:47:10.685328464 +0000 UTC m=+162.195261428" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.699547 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bjcvg" event={"ID":"59780c25-821e-416b-9437-6b78b8487221","Type":"ContainerStarted","Data":"feed4e132c83f9202803503f88fc9edcffe4014f526e6fbd36d8433c0c769f63"} Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.729572 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:10 crc 
kubenswrapper[4766]: E1213 03:47:10.732889 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:11.232853519 +0000 UTC m=+162.742786483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.762000 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bjcvg" podStartSLOduration=14.761968464 podStartE2EDuration="14.761968464s" podCreationTimestamp="2025-12-13 03:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:10.761858731 +0000 UTC m=+162.271791695" watchObservedRunningTime="2025-12-13 03:47:10.761968464 +0000 UTC m=+162.271901428" Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.843315 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.884324 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:11.384258765 +0000 UTC m=+162.894191929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:10 crc kubenswrapper[4766]: I1213 03:47:10.946058 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:10 crc kubenswrapper[4766]: E1213 03:47:10.946555 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:11.446537093 +0000 UTC m=+162.956470057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.057146 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:11 crc kubenswrapper[4766]: E1213 03:47:11.058061 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:11.558035517 +0000 UTC m=+163.067968481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.090699 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rgvbd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.090826 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd" podUID="d9ebba92-4a0b-494f-995e-09ebdcd65a6e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.15:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.230163 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:11 crc kubenswrapper[4766]: E1213 03:47:11.230687 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:11.730662115 +0000 UTC m=+163.240595089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.337663 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:11 crc kubenswrapper[4766]: E1213 03:47:11.338182 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:11.838146981 +0000 UTC m=+163.348079955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.342739 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:11 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Dec 13 03:47:11 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:11 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.342856 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.455051 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:11 crc kubenswrapper[4766]: E1213 03:47:11.455861 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:11.955832386 +0000 UTC m=+163.465765350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.556347 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:11 crc kubenswrapper[4766]: E1213 03:47:11.557080 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:12.057041927 +0000 UTC m=+163.566974901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.854829 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:11 crc kubenswrapper[4766]: E1213 03:47:11.855617 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:12.355587222 +0000 UTC m=+163.865520186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.908254 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bjcvg" Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.936480 4766 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4nlxj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Dec 13 03:47:11 crc kubenswrapper[4766]: I1213 03:47:11.936874 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj" podUID="b486f774-cede-460b-bc98-a89766288e88" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.010194 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:12 crc kubenswrapper[4766]: E1213 03:47:12.012709 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:12.512665484 +0000 UTC m=+164.022598448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.083375 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dqhc"] Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.128907 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:12 crc kubenswrapper[4766]: E1213 03:47:12.129385 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-13 03:47:12.62936512 +0000 UTC m=+164.139298084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.235709 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:12 crc kubenswrapper[4766]: E1213 03:47:12.236495 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:12.736464665 +0000 UTC m=+164.246397639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.236589 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:12 crc kubenswrapper[4766]: E1213 03:47:12.237112 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:12.737100633 +0000 UTC m=+164.247033597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.337897 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:12 crc kubenswrapper[4766]: E1213 03:47:12.338831 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:12.838805019 +0000 UTC m=+164.348737983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.365898 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:12 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Dec 13 03:47:12 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:12 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.365976 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.441164 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:12 crc kubenswrapper[4766]: E1213 03:47:12.441760 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:12.941742921 +0000 UTC m=+164.451675885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.566928 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:12 crc kubenswrapper[4766]: E1213 03:47:12.567522 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:13.067497653 +0000 UTC m=+164.577430617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.717673 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:12 crc kubenswrapper[4766]: E1213 03:47:12.718119 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:13.218101625 +0000 UTC m=+164.728034589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.823909 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 03:47:12 crc kubenswrapper[4766]: E1213 03:47:12.824259 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:13.324240081 +0000 UTC m=+164.834173045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 03:47:12 crc kubenswrapper[4766]: I1213 03:47:12.926399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:12 crc kubenswrapper[4766]: E1213 03:47:12.926832 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:13.426816793 +0000 UTC m=+164.936749757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.063490 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dqhc" event={"ID":"f162311d-72df-42b4-b586-7bc1d4945c99","Type":"ContainerStarted","Data":"1b13a5d9feb0b99e379077bfdce898afb44558fe3ef292b0f9f3b2fdb20feaa3"}
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.072376 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" event={"ID":"deca563a-35ff-43bc-8251-f3b7c80581b2","Type":"ContainerStarted","Data":"29a1913dc77202fe71dcc1df8c0cf0ca49fadb3bdbf1664c70e3dc07c9cb3524"}
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.101996 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:13 crc kubenswrapper[4766]: E1213 03:47:13.102489 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:13.60246645 +0000 UTC m=+165.112399424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.129603 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fsl6d"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.131619 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.139984 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d95nn"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.141663 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.149456 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.150075 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.151260 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.203311 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzmqb"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.205446 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-utilities\") pod \"redhat-marketplace-d95nn\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.205502 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-catalog-content\") pod \"redhat-marketplace-fsl6d\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") " pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.205560 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.205597 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c81d9226-937a-4f5f-a8f0-746425fffb1f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c81d9226-937a-4f5f-a8f0-746425fffb1f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.205630 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-catalog-content\") pod \"redhat-marketplace-d95nn\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.205700 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2c6z\" (UniqueName: \"kubernetes.io/projected/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-kube-api-access-n2c6z\") pod \"redhat-marketplace-fsl6d\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") " pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.205734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c81d9226-937a-4f5f-a8f0-746425fffb1f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c81d9226-937a-4f5f-a8f0-746425fffb1f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.205767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-utilities\") pod \"redhat-marketplace-fsl6d\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") " pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.205798 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnnp\" (UniqueName: \"kubernetes.io/projected/9c50096c-0297-4e96-868c-d34cfc326d46-kube-api-access-mfnnp\") pod \"redhat-marketplace-d95nn\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: E1213 03:47:13.206265 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:13.706242537 +0000 UTC m=+165.216175561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.293287 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.301753 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.301984 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7x6l"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.306962 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.307228 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-catalog-content\") pod \"redhat-marketplace-d95nn\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.307313 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2c6z\" (UniqueName: \"kubernetes.io/projected/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-kube-api-access-n2c6z\") pod \"redhat-marketplace-fsl6d\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") " pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.307341 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c81d9226-937a-4f5f-a8f0-746425fffb1f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c81d9226-937a-4f5f-a8f0-746425fffb1f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.307369 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-utilities\") pod \"redhat-marketplace-fsl6d\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") " pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.307400 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfnnp\" (UniqueName: \"kubernetes.io/projected/9c50096c-0297-4e96-868c-d34cfc326d46-kube-api-access-mfnnp\") pod \"redhat-marketplace-d95nn\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.307442 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-utilities\") pod \"redhat-marketplace-d95nn\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.307475 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-catalog-content\") pod \"redhat-marketplace-fsl6d\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") " pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.307513 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c81d9226-937a-4f5f-a8f0-746425fffb1f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c81d9226-937a-4f5f-a8f0-746425fffb1f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 03:47:13 crc kubenswrapper[4766]: E1213 03:47:13.308040 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:13.808018415 +0000 UTC m=+165.317951379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.308635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c81d9226-937a-4f5f-a8f0-746425fffb1f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c81d9226-937a-4f5f-a8f0-746425fffb1f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.308666 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-catalog-content\") pod \"redhat-marketplace-d95nn\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.309264 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-utilities\") pod \"redhat-marketplace-d95nn\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.309933 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-catalog-content\") pod \"redhat-marketplace-fsl6d\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") " pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.313901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-utilities\") pod \"redhat-marketplace-fsl6d\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") " pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.322264 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb4lm"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.325574 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d95nn"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.359309 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsl6d"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.383650 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:13 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:13 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:13 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.384191 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
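The failure mode repeated throughout this window is a registration race, not a broken volume: both MountDevice and TearDownAt need a CSI client for kubevirt.io.hostpath-provisioner, and the kubelet refuses to build one until the driver has announced itself over its registration socket. A minimal Go sketch of that lookup pattern (a simplified stand-in, not the actual kubelet code; csiDriverRegistry and its methods are hypothetical names):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // csiDriverRegistry mimics, in much-simplified form, the kubelet-side
    // map of registered CSI plugins.
    type csiDriverRegistry struct {
    	mu      sync.RWMutex
    	drivers map[string]string // driver name -> endpoint socket
    }

    func (r *csiDriverRegistry) register(name, endpoint string) {
    	r.mu.Lock()
    	defer r.mu.Unlock()
    	r.drivers[name] = endpoint
    }

    // newClient fails the same way the log does while the driver has not
    // registered yet; callers are expected to retry later.
    func (r *csiDriverRegistry) newClient(name string) (string, error) {
    	r.mu.RLock()
    	defer r.mu.RUnlock()
    	ep, ok := r.drivers[name]
    	if !ok {
    		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
    	}
    	return ep, nil
    }

    func main() {
    	reg := &csiDriverRegistry{drivers: map[string]string{}}
    	if _, err := reg.newClient("kubevirt.io.hostpath-provisioner"); err != nil {
    		fmt.Println("before registration:", err) // mirrors the MountDevice failures above
    	}
    	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
    	ep, _ := reg.newClient("kubevirt.io.hostpath-provisioner")
    	fmt.Println("after registration:", ep) // mirrors the recovery at 03:47:15
    }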
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.387374 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.399078 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfnnp\" (UniqueName: \"kubernetes.io/projected/9c50096c-0297-4e96-868c-d34cfc326d46-kube-api-access-mfnnp\") pod \"redhat-marketplace-d95nn\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.408559 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2c6z\" (UniqueName: \"kubernetes.io/projected/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-kube-api-access-n2c6z\") pod \"redhat-marketplace-fsl6d\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") " pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.410011 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:13 crc kubenswrapper[4766]: E1213 03:47:13.411063 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:13.91104653 +0000 UTC m=+165.420979494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.431559 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c81d9226-937a-4f5f-a8f0-746425fffb1f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c81d9226-937a-4f5f-a8f0-746425fffb1f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.512567 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:13 crc kubenswrapper[4766]: E1213 03:47:13.513017 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:14.012999842 +0000 UTC m=+165.522932806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.518525 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8d4vh"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.519869 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.533905 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.579844 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.738197 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.740558 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:13 crc kubenswrapper[4766]: E1213 03:47:13.747343 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:14.247310041 +0000 UTC m=+165.757243005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.759869 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d95nn"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.775697 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8d4vh"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.779716 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kq9t5"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.782090 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kq9t5"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.862444 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.862664 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8q56\" (UniqueName: \"kubernetes.io/projected/b65de837-4baa-4aff-98cb-2babbdfdb2f5-kube-api-access-f8q56\") pod \"redhat-operators-8d4vh\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") " pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.862766 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hks6n\" (UniqueName: \"kubernetes.io/projected/116c9e00-53c9-4d49-813e-2f0dd2b24411-kube-api-access-hks6n\") pod \"redhat-operators-kq9t5\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " pod="openshift-marketplace/redhat-operators-kq9t5"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.862797 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-utilities\") pod \"redhat-operators-kq9t5\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " pod="openshift-marketplace/redhat-operators-kq9t5"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.862864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-catalog-content\") pod \"redhat-operators-8d4vh\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") " pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.862885 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-utilities\") pod \"redhat-operators-8d4vh\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") " pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.862911 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-catalog-content\") pod \"redhat-operators-kq9t5\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " pod="openshift-marketplace/redhat-operators-kq9t5"
Dec 13 03:47:13 crc kubenswrapper[4766]: E1213 03:47:13.863031 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:14.363008478 +0000 UTC m=+165.872941442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.863498 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kq9t5"]
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.963883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hks6n\" (UniqueName: \"kubernetes.io/projected/116c9e00-53c9-4d49-813e-2f0dd2b24411-kube-api-access-hks6n\") pod \"redhat-operators-kq9t5\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " pod="openshift-marketplace/redhat-operators-kq9t5"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.963942 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-utilities\") pod \"redhat-operators-kq9t5\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " pod="openshift-marketplace/redhat-operators-kq9t5"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.964236 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-catalog-content\") pod \"redhat-operators-8d4vh\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") " pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.964272 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-utilities\") pod \"redhat-operators-8d4vh\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") " pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.964303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-catalog-content\") pod \"redhat-operators-kq9t5\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " pod="openshift-marketplace/redhat-operators-kq9t5"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.964343 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.964400 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8q56\" (UniqueName: \"kubernetes.io/projected/b65de837-4baa-4aff-98cb-2babbdfdb2f5-kube-api-access-f8q56\") pod \"redhat-operators-8d4vh\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") " pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.965866 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-utilities\") pod \"redhat-operators-kq9t5\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " pod="openshift-marketplace/redhat-operators-kq9t5"
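Each failed volume operation above is followed by a "No retries permitted until <t>" line from nestedpendingoperations.go: the operation is parked and refused until a backoff window elapses. The log shows the initial 500ms window; the real kubelet grows this delay on repeated failures. A sketch of that gating under those assumptions (retryGate is a hypothetical name, not the kubelet type):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryGate reproduces the "No retries permitted until <t>" behaviour:
    // after a failure, the same operation key is refused until its backoff
    // window has elapsed. (Simplified: a fixed 500ms window, no growth.)
    type retryGate struct {
    	notBefore map[string]time.Time
    	delay     time.Duration
    }

    func (g *retryGate) run(key string, op func() error) error {
    	if until, ok := g.notBefore[key]; ok && time.Now().Before(until) {
    		return fmt.Errorf("operation for %q failed. No retries permitted until %s", key, until.Format(time.RFC3339Nano))
    	}
    	if err := op(); err != nil {
    		g.notBefore[key] = time.Now().Add(g.delay) // arm the gate
    		return err
    	}
    	delete(g.notBefore, key) // success clears the backoff
    	return nil
    }

    func main() {
    	g := &retryGate{notBefore: map[string]time.Time{}, delay: 500 * time.Millisecond}
    	mount := func() error { return errors.New("driver not registered") }
    	fmt.Println(g.run("pvc-657094db", mount)) // first failure arms the gate
    	fmt.Println(g.run("pvc-657094db", mount)) // refused: still inside the 500ms window
    	time.Sleep(600 * time.Millisecond)
    	fmt.Println(g.run("pvc-657094db", mount)) // window elapsed, operation attempted again
    }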
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.966331 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-catalog-content\") pod \"redhat-operators-8d4vh\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") " pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.966787 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-utilities\") pod \"redhat-operators-8d4vh\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") " pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:13 crc kubenswrapper[4766]: I1213 03:47:13.967141 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-catalog-content\") pod \"redhat-operators-kq9t5\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " pod="openshift-marketplace/redhat-operators-kq9t5"
Dec 13 03:47:13 crc kubenswrapper[4766]: E1213 03:47:13.978789 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:14.478762796 +0000 UTC m=+165.988695770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.024659 4766 patch_prober.go:28] interesting pod/apiserver-76f77b778f-754n6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]log ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]etcd ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]poststarthook/max-in-flight-filter ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 13 03:47:14 crc kubenswrapper[4766]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-startinformers ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 13 03:47:14 crc kubenswrapper[4766]: livez check failed
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.024801 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-754n6" podUID="d4e7b858-e4fe-4194-a5f9-3f37ec5b06ba" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.065300 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:14 crc kubenswrapper[4766]: E1213 03:47:14.066025 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:14.565901195 +0000 UTC m=+166.075834159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.202631 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:14 crc kubenswrapper[4766]: E1213 03:47:14.203581 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:14.703562966 +0000 UTC m=+166.213495930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.221200 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8q56\" (UniqueName: \"kubernetes.io/projected/b65de837-4baa-4aff-98cb-2babbdfdb2f5-kube-api-access-f8q56\") pod \"redhat-operators-8d4vh\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") " pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.234586 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hks6n\" (UniqueName: \"kubernetes.io/projected/116c9e00-53c9-4d49-813e-2f0dd2b24411-kube-api-access-hks6n\") pod \"redhat-operators-kq9t5\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " pod="openshift-marketplace/redhat-operators-kq9t5"
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.288672 4766 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.305366 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.306720 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:14 crc kubenswrapper[4766]: E1213 03:47:14.316489 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:14.807391195 +0000 UTC m=+166.317324169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.346983 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb4lm" event={"ID":"16b3386a-f52e-47b8-a0cb-172dd34f4761","Type":"ContainerStarted","Data":"ebbd82bfe21937fc73d1a7edf3560229b34249a02b5e882ed09ee369c511692a"}
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.347108 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:14 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:14 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:14 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.347152 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.395156 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7x6l" event={"ID":"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd","Type":"ContainerStarted","Data":"c197731db52926c0a92ac5cb94047ffc4f13c61fdc3cb4e13ce8043598dcd839"}
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.441051 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:14 crc kubenswrapper[4766]: E1213 03:47:14.441445 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:14.94141186 +0000 UTC m=+166.451344824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
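The router and apiserver probe output above is the standard Kubernetes aggregated-healthz format: one [+]name ok or [-]name failed line per sub-check, with the endpoint returning HTTP 500 while any check still fails, which is exactly what the startup prober then logs. A hand-rolled sketch of such an endpoint (illustrative only, not the code the router or apiserver actually uses):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    // check is one named sub-check of an aggregated healthz endpoint.
    type check struct {
    	name string
    	fn   func() error
    }

    // healthzHandler renders the same "[+]name ok" / "[-]name failed" body
    // seen in the probe output above, returning 500 while any check fails.
    func healthzHandler(checks []check) http.HandlerFunc {
    	return func(w http.ResponseWriter, r *http.Request) {
    		failed := false
    		body := ""
    		for _, c := range checks {
    			if err := c.fn(); err != nil {
    				failed = true
    				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
    			} else {
    				body += fmt.Sprintf("[+]%s ok\n", c.name)
    			}
    		}
    		if failed {
    			w.WriteHeader(http.StatusInternalServerError)
    			body += "healthz check failed\n"
    		}
    		fmt.Fprint(w, body)
    	}
    }

    func main() {
    	checks := []check{
    		{"backend-http", func() error { return fmt.Errorf("not ready") }},
    		{"has-synced", func() error { return fmt.Errorf("not synced") }},
    		{"process-running", func() error { return nil }},
    	}
    	http.Handle("/healthz", healthzHandler(checks))
    	// A startup probe against this endpoint would log the same
    	// "HTTP probe failed with statuscode: 500" failures seen above.
    	http.ListenAndServe(":8080", nil)
    }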
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.471905 4766 generic.go:334] "Generic (PLEG): container finished" podID="f162311d-72df-42b4-b586-7bc1d4945c99" containerID="c08c7649abe1a4d54b232e131e0e67a724cb5092b87aebc996737e1605b1c219" exitCode=0
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.472511 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dqhc" event={"ID":"f162311d-72df-42b4-b586-7bc1d4945c99","Type":"ContainerDied","Data":"c08c7649abe1a4d54b232e131e0e67a724cb5092b87aebc996737e1605b1c219"}
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.476495 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.495699 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzmqb" event={"ID":"35afc352-db00-48b9-b888-6b8b9bc36403","Type":"ContainerStarted","Data":"e5f8bbe62947f8c0b02043627cf632cd5bc7bb2dec7d8b22b2790c533d5cc52f"}
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.528651 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kq9t5"
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.549923 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:14 crc kubenswrapper[4766]: E1213 03:47:14.551000 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:15.050981416 +0000 UTC m=+166.560914380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.574762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" event={"ID":"deca563a-35ff-43bc-8251-f3b7c80581b2","Type":"ContainerStarted","Data":"164ce2c7d6907a9e83fb204c4a0616a8179dd5d7bfae0ec5ab5163f0fdd8e0bb"}
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.688510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:14 crc kubenswrapper[4766]: E1213 03:47:14.688965 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:15.188948307 +0000 UTC m=+166.698881271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.791261 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:14 crc kubenswrapper[4766]: E1213 03:47:14.793318 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 03:47:15.29329204 +0000 UTC m=+166.803225014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.861573 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nlxj"
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.869294 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj"
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.947965 4766 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-13T03:47:14.288701486Z","Handler":null,"Name":""}
Dec 13 03:47:14 crc kubenswrapper[4766]: I1213 03:47:14.948883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:14 crc kubenswrapper[4766]: E1213 03:47:14.949560 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 03:47:15.449533488 +0000 UTC m=+166.959466452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5fm8f" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.002031 4766 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.002112 4766 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.050345 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.111124 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.156283 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.170660 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
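This is the turning point of the whole sequence: the plugin watcher sees kubevirt.io.hostpath-provisioner-reg.sock at 03:47:14.288, the driver validates and registers at 03:47:15.002, and the long-failing mount path immediately unblocks. A simplified sketch of that socket-discovery step, assuming an fsnotify-style watch on the registration directory (stand-in code, not kubelet's plugin_watcher itself):

    package main

    import (
    	"fmt"
    	"log"
    	"strings"

    	"github.com/fsnotify/fsnotify"
    )

    func main() {
    	// Watch the kubelet plugin registration directory for new driver
    	// sockets. (Simplified: re-scanning of pre-existing sockets and
    	// the registration gRPC handshake are omitted.)
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()
    	if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
    		log.Fatal(err)
    	}
    	for ev := range w.Events {
    		if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, ".sock") {
    			// In the real kubelet this is where the socket is handed to
    			// the registration handler, leading to the csi_plugin.go
    			// "Register new plugin" lines seen above.
    			fmt.Println("Adding socket path to desired state cache:", ev.Name)
    		}
    	}
    }

Once registration completes, the log also shows why the mount then succeeds in two steps: this driver does not advertise STAGE_UNSTAGE_VOLUME, so MountDevice is skipped outright and only the per-pod SetUp runs.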
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.170743 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.528962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5fm8f\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.594932 4766 generic.go:334] "Generic (PLEG): container finished" podID="35afc352-db00-48b9-b888-6b8b9bc36403" containerID="c7362cd8cc7317a068cca58fe1f3c7f129620d35b745b2af3a91b4ed2c1c83e0" exitCode=0
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.595010 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzmqb" event={"ID":"35afc352-db00-48b9-b888-6b8b9bc36403","Type":"ContainerDied","Data":"c7362cd8cc7317a068cca58fe1f3c7f129620d35b745b2af3a91b4ed2c1c83e0"}
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.599822 4766 generic.go:334] "Generic (PLEG): container finished" podID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerID="96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87" exitCode=0
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.599896 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb4lm" event={"ID":"16b3386a-f52e-47b8-a0cb-172dd34f4761","Type":"ContainerDied","Data":"96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87"}
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.606674 4766 generic.go:334] "Generic (PLEG): container finished" podID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" containerID="7f463a63294b1047028824a05a7b16c5a6c86e5d6e214890de78854956c964cc" exitCode=0
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.606740 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7x6l" event={"ID":"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd","Type":"ContainerDied","Data":"7f463a63294b1047028824a05a7b16c5a6c86e5d6e214890de78854956c964cc"}
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.657508 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.664728 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:15 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:15 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:15 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.664800 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.794107 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d95nn"]
Dec 13 03:47:15 crc kubenswrapper[4766]: I1213 03:47:15.847605 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f"
Dec 13 03:47:15 crc kubenswrapper[4766]: W1213 03:47:15.978277 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c50096c_0297_4e96_868c_d34cfc326d46.slice/crio-946890c205ce0dc4db884c704885e7114ff78d3d6461b58271cdb13a41e5e910 WatchSource:0}: Error finding container 946890c205ce0dc4db884c704885e7114ff78d3d6461b58271cdb13a41e5e910: Status 404 returned error can't find the container with id 946890c205ce0dc4db884c704885e7114ff78d3d6461b58271cdb13a41e5e910
Dec 13 03:47:16 crc kubenswrapper[4766]: I1213 03:47:16.508955 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:16 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:16 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:16 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:16 crc kubenswrapper[4766]: I1213 03:47:16.509081 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:16 crc kubenswrapper[4766]: I1213 03:47:16.643516 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d95nn" event={"ID":"9c50096c-0297-4e96-868c-d34cfc326d46","Type":"ContainerStarted","Data":"946890c205ce0dc4db884c704885e7114ff78d3d6461b58271cdb13a41e5e910"}
Dec 13 03:47:16 crc kubenswrapper[4766]: I1213 03:47:16.814946 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kq9t5"]
Dec 13 03:47:16 crc kubenswrapper[4766]: I1213 03:47:16.978560 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8d4vh"]
Dec 13 03:47:16 crc kubenswrapper[4766]: W1213 03:47:16.993108 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod116c9e00_53c9_4d49_813e_2f0dd2b24411.slice/crio-a5f248f6495e69cafe2f44f6d11a6470db5d35198102e3478dd4050dca12ff45 WatchSource:0}: Error finding container a5f248f6495e69cafe2f44f6d11a6470db5d35198102e3478dd4050dca12ff45: Status 404 returned error can't find the container with id a5f248f6495e69cafe2f44f6d11a6470db5d35198102e3478dd4050dca12ff45
Dec 13 03:47:17 crc kubenswrapper[4766]: W1213 03:47:17.011204 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb65de837_4baa_4aff_98cb_2babbdfdb2f5.slice/crio-c3a29ff0b6baeb73419c2d155856060e25e14270f0218520d75986cdfb288024 WatchSource:0}: Error finding container c3a29ff0b6baeb73419c2d155856060e25e14270f0218520d75986cdfb288024: Status 404 returned error can't find the container with id c3a29ff0b6baeb73419c2d155856060e25e14270f0218520d75986cdfb288024
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.309313 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.310680 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.319124 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.320965 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.325380 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.344740 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsl6d"]
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.350992 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.432809 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:17 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:17 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:17 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.433375 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.461122 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bjcvg"
Dec 13 03:47:17 crc kubenswrapper[4766]: W1213 03:47:17.479126 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc81d9226_937a_4f5f_a8f0_746425fffb1f.slice/crio-a23d09978d735aae8252ad71fac95beb24211b20c5d8643e47f5d3a1e01b691d WatchSource:0}: Error finding container a23d09978d735aae8252ad71fac95beb24211b20c5d8643e47f5d3a1e01b691d: Status 404 returned error can't find the container with id a23d09978d735aae8252ad71fac95beb24211b20c5d8643e47f5d3a1e01b691d
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.517409 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc576874-5930-44fc-9382-95b4d9aee001-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bc576874-5930-44fc-9382-95b4d9aee001\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
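The manager.go "Status 404" warnings above are a benign ordering race: the cgroup watcher notices a new crio-<id> cgroup before the runtime has published that container's metadata, so the first lookup finds nothing and a later pass picks the container up (each of these container IDs appears in a successful ContainerStarted event shortly afterwards). A toy sketch of tolerating that ordering (errNotFound and lookupContainer are hypothetical names, not cAdvisor's API):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // errNotFound stands in for the runtime's 404 on a container that the
    // cgroup watcher saw before the runtime finished registering it.
    var errNotFound = errors.New("status 404: container not found")

    // lookupContainer is a hypothetical metadata lookup; the first call
    // races container creation and fails, a later retry succeeds.
    func lookupContainer(id string, ready bool) (string, error) {
    	if !ready {
    		return "", errNotFound
    	}
    	return "metadata for " + id, nil
    }

    func main() {
    	id := "946890c205ce0dc4db884c704885e7114ff78d3d6461b58271cdb13a41e5e910"
    	if _, err := lookupContainer(id, false); errors.Is(err, errNotFound) {
    		// Treat the 404 as transient: log it and let a later
    		// housekeeping pass pick the container up, rather than
    		// failing the watch loop.
    		fmt.Println("Failed to process watch event:", err)
    	}
    	time.Sleep(10 * time.Millisecond) // container finishes registering
    	meta, _ := lookupContainer(id, true)
    	fmt.Println(meta)
    }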
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.517483 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc576874-5930-44fc-9382-95b4d9aee001-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bc576874-5930-44fc-9382-95b4d9aee001\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.764870 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc576874-5930-44fc-9382-95b4d9aee001-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bc576874-5930-44fc-9382-95b4d9aee001\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.765485 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc576874-5930-44fc-9382-95b4d9aee001-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bc576874-5930-44fc-9382-95b4d9aee001\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.765665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc576874-5930-44fc-9382-95b4d9aee001-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bc576874-5930-44fc-9382-95b4d9aee001\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.964390 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-754n6"
Dec 13 03:47:17 crc kubenswrapper[4766]: I1213 03:47:17.968621 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c81d9226-937a-4f5f-a8f0-746425fffb1f","Type":"ContainerStarted","Data":"a23d09978d735aae8252ad71fac95beb24211b20c5d8643e47f5d3a1e01b691d"}
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.052693 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-754n6"
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.093023 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc576874-5930-44fc-9382-95b4d9aee001-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bc576874-5930-44fc-9382-95b4d9aee001\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.301354 4766 generic.go:334] "Generic (PLEG): container finished" podID="9c50096c-0297-4e96-868c-d34cfc326d46" containerID="136dee99e7d42b2677f001149c83748d753d105eeb950f5c21432d7a801626a4" exitCode=0
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.302856 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d95nn" event={"ID":"9c50096c-0297-4e96-868c-d34cfc326d46","Type":"ContainerDied","Data":"136dee99e7d42b2677f001149c83748d753d105eeb950f5c21432d7a801626a4"}
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.398597 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.405373 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:18 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:18 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:18 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.405466 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.415627 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4vh" event={"ID":"b65de837-4baa-4aff-98cb-2babbdfdb2f5","Type":"ContainerStarted","Data":"c3a29ff0b6baeb73419c2d155856060e25e14270f0218520d75986cdfb288024"}
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.416986 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq9t5" event={"ID":"116c9e00-53c9-4d49-813e-2f0dd2b24411","Type":"ContainerStarted","Data":"a5f248f6495e69cafe2f44f6d11a6470db5d35198102e3478dd4050dca12ff45"}
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.418035 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsl6d" event={"ID":"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34","Type":"ContainerStarted","Data":"ba78dde592a2cfc27f8d60762a94e1f7bebe0babdaa260adb0e9cfa9b0d20afe"}
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.421871 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" event={"ID":"deca563a-35ff-43bc-8251-f3b7c80581b2","Type":"ContainerStarted","Data":"8512f4b76424b09ad615c768ea4d0693eec7a8dbcf90378ab22701509804d4a0"}
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.792957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm"
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.804156 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rgvbd"
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.824043 4766 patch_prober.go:28] interesting pod/console-f9d7485db-x546d container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.824150 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x546d" podUID="fc132ae0-d7bd-4064-89ad-4f9a57e76369" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.839632 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-j5v8m" podStartSLOduration=22.839605457 podStartE2EDuration="22.839605457s" podCreationTimestamp="2025-12-13 03:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:18.838747612 +0000 UTC m=+170.348680586" watchObservedRunningTime="2025-12-13 03:47:18.839605457 +0000 UTC m=+170.349538421"
Dec 13 03:47:18 crc kubenswrapper[4766]: I1213 03:47:18.895700 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84c9636d-a525-40e8-bc35-af07ecbdeafc-metrics-certs\") pod \"network-metrics-daemon-qvxrm\" (UID: \"84c9636d-a525-40e8-bc35-af07ecbdeafc\") " pod="openshift-multus/network-metrics-daemon-qvxrm"
Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.053987 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvxrm"
Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.078064 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5fm8f"]
Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.096015 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.096169 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.096686 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.096707 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.376882 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 03:47:19 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Dec 13 03:47:19 crc kubenswrapper[4766]: [+]process-running ok
Dec 13 03:47:19 crc kubenswrapper[4766]: healthz check failed
Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.376977 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.563685
4766 generic.go:334] "Generic (PLEG): container finished" podID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerID="b0f6b29b253a7d0ba62ff72ddafae66c92c41b1aad72eb4cf955033a6d9a90ed" exitCode=0 Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.563882 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4vh" event={"ID":"b65de837-4baa-4aff-98cb-2babbdfdb2f5","Type":"ContainerDied","Data":"b0f6b29b253a7d0ba62ff72ddafae66c92c41b1aad72eb4cf955033a6d9a90ed"} Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.857453 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" event={"ID":"342d355f-91c2-4c74-b72e-fa4164314fe1","Type":"ContainerStarted","Data":"d53cecfb8952a4c87814db71552cd2856b25916f2a6fe12d59348fb943af325f"} Dec 13 03:47:19 crc kubenswrapper[4766]: I1213 03:47:19.863771 4766 generic.go:334] "Generic (PLEG): container finished" podID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerID="55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400" exitCode=0 Dec 13 03:47:20 crc kubenswrapper[4766]: I1213 03:47:19.956853 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq9t5" event={"ID":"116c9e00-53c9-4d49-813e-2f0dd2b24411","Type":"ContainerDied","Data":"55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400"} Dec 13 03:47:20 crc kubenswrapper[4766]: I1213 03:47:20.267488 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 13 03:47:20 crc kubenswrapper[4766]: I1213 03:47:20.349866 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:20 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Dec 13 03:47:20 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:20 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:20 crc kubenswrapper[4766]: I1213 03:47:20.349937 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:20 crc kubenswrapper[4766]: I1213 03:47:20.629199 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qvxrm"] Dec 13 03:47:20 crc kubenswrapper[4766]: W1213 03:47:20.706029 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84c9636d_a525_40e8_bc35_af07ecbdeafc.slice/crio-ba95c6d93fbe43a18f4702204c314a18cd56ac48aa45e4fe458dc1eb33de56a6 WatchSource:0}: Error finding container ba95c6d93fbe43a18f4702204c314a18cd56ac48aa45e4fe458dc1eb33de56a6: Status 404 returned error can't find the container with id ba95c6d93fbe43a18f4702204c314a18cd56ac48aa45e4fe458dc1eb33de56a6 Dec 13 03:47:20 crc kubenswrapper[4766]: I1213 03:47:20.933048 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bc576874-5930-44fc-9382-95b4d9aee001","Type":"ContainerStarted","Data":"3e9f8cfb5057f6e552ed6dac1fdb02e172a02c5afd855917089f36dfc0c70f4b"} Dec 13 03:47:21 crc kubenswrapper[4766]: I1213 03:47:21.020853 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c81d9226-937a-4f5f-a8f0-746425fffb1f","Type":"ContainerStarted","Data":"dc18bd716fc4cfdfa28025c37169e1972c97555ae86470900d7b133bd548700d"} Dec 13 03:47:21 crc kubenswrapper[4766]: I1213 03:47:21.040479 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" event={"ID":"342d355f-91c2-4c74-b72e-fa4164314fe1","Type":"ContainerStarted","Data":"bdfbe8c49094e2e5837da53d8773773ce2a52cd549bbb1217200ea950de86eaa"} Dec 13 03:47:21 crc kubenswrapper[4766]: I1213 03:47:21.041767 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:21 crc kubenswrapper[4766]: I1213 03:47:21.045103 4766 generic.go:334] "Generic (PLEG): container finished" podID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerID="785ce32cb67032b6e16a79f53c11a4502b38cf6321d3c525a0a6c3670da42d0b" exitCode=0 Dec 13 03:47:21 crc kubenswrapper[4766]: I1213 03:47:21.045228 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsl6d" event={"ID":"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34","Type":"ContainerDied","Data":"785ce32cb67032b6e16a79f53c11a4502b38cf6321d3c525a0a6c3670da42d0b"} Dec 13 03:47:21 crc kubenswrapper[4766]: I1213 03:47:21.047684 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" event={"ID":"84c9636d-a525-40e8-bc35-af07ecbdeafc","Type":"ContainerStarted","Data":"ba95c6d93fbe43a18f4702204c314a18cd56ac48aa45e4fe458dc1eb33de56a6"} Dec 13 03:47:21 crc kubenswrapper[4766]: I1213 03:47:21.070393 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=9.07037046 podStartE2EDuration="9.07037046s" podCreationTimestamp="2025-12-13 03:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:21.06933603 +0000 UTC m=+172.579269014" watchObservedRunningTime="2025-12-13 03:47:21.07037046 +0000 UTC m=+172.580303424" Dec 13 03:47:21 crc kubenswrapper[4766]: I1213 03:47:21.199280 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" podStartSLOduration=147.19912287 podStartE2EDuration="2m27.19912287s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:21.197545964 +0000 UTC m=+172.707479128" watchObservedRunningTime="2025-12-13 03:47:21.19912287 +0000 UTC m=+172.709055834" Dec 13 03:47:21 crc kubenswrapper[4766]: I1213 03:47:21.335410 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:21 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Dec 13 03:47:21 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:21 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:21 crc kubenswrapper[4766]: I1213 03:47:21.335998 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" 
podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:21 crc kubenswrapper[4766]: E1213 03:47:21.346234 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podc81d9226_937a_4f5f_a8f0_746425fffb1f.slice/crio-dc18bd716fc4cfdfa28025c37169e1972c97555ae86470900d7b133bd548700d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podc81d9226_937a_4f5f_a8f0_746425fffb1f.slice/crio-conmon-dc18bd716fc4cfdfa28025c37169e1972c97555ae86470900d7b133bd548700d.scope\": RecentStats: unable to find data in memory cache]" Dec 13 03:47:22 crc kubenswrapper[4766]: I1213 03:47:22.350904 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:22 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Dec 13 03:47:22 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:22 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:22 crc kubenswrapper[4766]: I1213 03:47:22.351385 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:22 crc kubenswrapper[4766]: I1213 03:47:22.446098 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" event={"ID":"84c9636d-a525-40e8-bc35-af07ecbdeafc","Type":"ContainerStarted","Data":"58316edfa0197407702c61205acd3a48715369d0843f2550da4ac6265e92c017"} Dec 13 03:47:22 crc kubenswrapper[4766]: I1213 03:47:22.912208 4766 generic.go:334] "Generic (PLEG): container finished" podID="c81d9226-937a-4f5f-a8f0-746425fffb1f" containerID="dc18bd716fc4cfdfa28025c37169e1972c97555ae86470900d7b133bd548700d" exitCode=0 Dec 13 03:47:22 crc kubenswrapper[4766]: I1213 03:47:22.913147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c81d9226-937a-4f5f-a8f0-746425fffb1f","Type":"ContainerDied","Data":"dc18bd716fc4cfdfa28025c37169e1972c97555ae86470900d7b133bd548700d"} Dec 13 03:47:23 crc kubenswrapper[4766]: I1213 03:47:23.341239 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:23 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Dec 13 03:47:23 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:23 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:23 crc kubenswrapper[4766]: I1213 03:47:23.341313 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:24 crc kubenswrapper[4766]: I1213 03:47:24.348040 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:24 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Dec 13 03:47:24 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:24 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:24 crc kubenswrapper[4766]: I1213 03:47:24.348502 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:25 crc kubenswrapper[4766]: I1213 03:47:25.018565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bc576874-5930-44fc-9382-95b4d9aee001","Type":"ContainerStarted","Data":"c3346eea43f12d79b7ef3110a3c67b81ce0e3d9212f86b49c4b476963c7f3675"} Dec 13 03:47:25 crc kubenswrapper[4766]: I1213 03:47:25.027449 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qvxrm" event={"ID":"84c9636d-a525-40e8-bc35-af07ecbdeafc","Type":"ContainerStarted","Data":"64ddd2641dbc951c12d07c8d318d84f019729aebccdd242a767a226c445f5549"} Dec 13 03:47:25 crc kubenswrapper[4766]: I1213 03:47:25.127963 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=8.127931406 podStartE2EDuration="8.127931406s" podCreationTimestamp="2025-12-13 03:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:25.092361142 +0000 UTC m=+176.602294106" watchObservedRunningTime="2025-12-13 03:47:25.127931406 +0000 UTC m=+176.637864370" Dec 13 03:47:25 crc kubenswrapper[4766]: I1213 03:47:25.336411 4766 patch_prober.go:28] interesting pod/router-default-5444994796-zxh2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 03:47:25 crc kubenswrapper[4766]: [+]has-synced ok Dec 13 03:47:25 crc kubenswrapper[4766]: [+]process-running ok Dec 13 03:47:25 crc kubenswrapper[4766]: healthz check failed Dec 13 03:47:25 crc kubenswrapper[4766]: I1213 03:47:25.336975 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zxh2f" podUID="a08f722d-f913-4c66-8b1f-5ad1285884cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:47:25 crc kubenswrapper[4766]: I1213 03:47:25.791244 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 13 03:47:25 crc kubenswrapper[4766]: I1213 03:47:25.899837 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c81d9226-937a-4f5f-a8f0-746425fffb1f-kube-api-access\") pod \"c81d9226-937a-4f5f-a8f0-746425fffb1f\" (UID: \"c81d9226-937a-4f5f-a8f0-746425fffb1f\") " Dec 13 03:47:25 crc kubenswrapper[4766]: I1213 03:47:25.899995 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c81d9226-937a-4f5f-a8f0-746425fffb1f-kubelet-dir\") pod \"c81d9226-937a-4f5f-a8f0-746425fffb1f\" (UID: \"c81d9226-937a-4f5f-a8f0-746425fffb1f\") " Dec 13 03:47:25 crc kubenswrapper[4766]: I1213 03:47:25.900867 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c81d9226-937a-4f5f-a8f0-746425fffb1f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c81d9226-937a-4f5f-a8f0-746425fffb1f" (UID: "c81d9226-937a-4f5f-a8f0-746425fffb1f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:47:25 crc kubenswrapper[4766]: I1213 03:47:25.954567 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qvxrm" podStartSLOduration=151.954510934 podStartE2EDuration="2m31.954510934s" podCreationTimestamp="2025-12-13 03:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:47:25.132874511 +0000 UTC m=+176.642807485" watchObservedRunningTime="2025-12-13 03:47:25.954510934 +0000 UTC m=+177.464443918" Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.007612 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c81d9226-937a-4f5f-a8f0-746425fffb1f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.061730 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81d9226-937a-4f5f-a8f0-746425fffb1f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c81d9226-937a-4f5f-a8f0-746425fffb1f" (UID: "c81d9226-937a-4f5f-a8f0-746425fffb1f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.102571 4766 generic.go:334] "Generic (PLEG): container finished" podID="bc576874-5930-44fc-9382-95b4d9aee001" containerID="c3346eea43f12d79b7ef3110a3c67b81ce0e3d9212f86b49c4b476963c7f3675" exitCode=0 Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.103504 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bc576874-5930-44fc-9382-95b4d9aee001","Type":"ContainerDied","Data":"c3346eea43f12d79b7ef3110a3c67b81ce0e3d9212f86b49c4b476963c7f3675"} Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.181216 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c81d9226-937a-4f5f-a8f0-746425fffb1f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.255536 4766 generic.go:334] "Generic (PLEG): container finished" podID="e283eb17-df64-4565-8f82-a8afac963d4c" containerID="2b2231b927c555af9acbf62a67c41c603dc75cd79cd54cceab6bb34584de7db8" exitCode=0 Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.255689 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" event={"ID":"e283eb17-df64-4565-8f82-a8afac963d4c","Type":"ContainerDied","Data":"2b2231b927c555af9acbf62a67c41c603dc75cd79cd54cceab6bb34584de7db8"} Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.259267 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.263089 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c81d9226-937a-4f5f-a8f0-746425fffb1f","Type":"ContainerDied","Data":"a23d09978d735aae8252ad71fac95beb24211b20c5d8643e47f5d3a1e01b691d"} Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.263159 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23d09978d735aae8252ad71fac95beb24211b20c5d8643e47f5d3a1e01b691d" Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.490570 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:47:26 crc kubenswrapper[4766]: I1213 03:47:26.496504 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zxh2f" Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.584257 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.676211 4766 patch_prober.go:28] interesting pod/console-f9d7485db-x546d container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.676297 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x546d" podUID="fc132ae0-d7bd-4064-89ad-4f9a57e76369" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.737342 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e283eb17-df64-4565-8f82-a8afac963d4c-secret-volume\") pod \"e283eb17-df64-4565-8f82-a8afac963d4c\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.737595 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxthz\" (UniqueName: \"kubernetes.io/projected/e283eb17-df64-4565-8f82-a8afac963d4c-kube-api-access-cxthz\") pod \"e283eb17-df64-4565-8f82-a8afac963d4c\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.737864 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e283eb17-df64-4565-8f82-a8afac963d4c-config-volume\") pod \"e283eb17-df64-4565-8f82-a8afac963d4c\" (UID: \"e283eb17-df64-4565-8f82-a8afac963d4c\") " Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.739814 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e283eb17-df64-4565-8f82-a8afac963d4c-config-volume" (OuterVolumeSpecName: "config-volume") pod "e283eb17-df64-4565-8f82-a8afac963d4c" (UID: "e283eb17-df64-4565-8f82-a8afac963d4c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.787914 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e283eb17-df64-4565-8f82-a8afac963d4c-kube-api-access-cxthz" (OuterVolumeSpecName: "kube-api-access-cxthz") pod "e283eb17-df64-4565-8f82-a8afac963d4c" (UID: "e283eb17-df64-4565-8f82-a8afac963d4c"). InnerVolumeSpecName "kube-api-access-cxthz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.792686 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e283eb17-df64-4565-8f82-a8afac963d4c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e283eb17-df64-4565-8f82-a8afac963d4c" (UID: "e283eb17-df64-4565-8f82-a8afac963d4c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.840814 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e283eb17-df64-4565-8f82-a8afac963d4c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.840854 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxthz\" (UniqueName: \"kubernetes.io/projected/e283eb17-df64-4565-8f82-a8afac963d4c-kube-api-access-cxthz\") on node \"crc\" DevicePath \"\"" Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.840865 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e283eb17-df64-4565-8f82-a8afac963d4c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 03:47:28 crc kubenswrapper[4766]: I1213 03:47:28.869141 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.045102 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc576874-5930-44fc-9382-95b4d9aee001-kube-api-access\") pod \"bc576874-5930-44fc-9382-95b4d9aee001\" (UID: \"bc576874-5930-44fc-9382-95b4d9aee001\") " Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.045708 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc576874-5930-44fc-9382-95b4d9aee001-kubelet-dir\") pod \"bc576874-5930-44fc-9382-95b4d9aee001\" (UID: \"bc576874-5930-44fc-9382-95b4d9aee001\") " Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.046116 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc576874-5930-44fc-9382-95b4d9aee001-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bc576874-5930-44fc-9382-95b4d9aee001" (UID: "bc576874-5930-44fc-9382-95b4d9aee001"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.062196 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc576874-5930-44fc-9382-95b4d9aee001-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bc576874-5930-44fc-9382-95b4d9aee001" (UID: "bc576874-5930-44fc-9382-95b4d9aee001"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.094090 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.094149 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.094290 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.096392 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.096555 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-tjszx" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.099228 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.099302 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.100400 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"319772b744c1f6f757f29417745f8e699c4183738fa94514e30bd1beac043b2c"} pod="openshift-console/downloads-7954f5f757-tjszx" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.100589 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" containerID="cri-o://319772b744c1f6f757f29417745f8e699c4183738fa94514e30bd1beac043b2c" gracePeriod=2 Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.147866 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc576874-5930-44fc-9382-95b4d9aee001-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.147919 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/bc576874-5930-44fc-9382-95b4d9aee001-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.291938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bc576874-5930-44fc-9382-95b4d9aee001","Type":"ContainerDied","Data":"3e9f8cfb5057f6e552ed6dac1fdb02e172a02c5afd855917089f36dfc0c70f4b"} Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.291986 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e9f8cfb5057f6e552ed6dac1fdb02e172a02c5afd855917089f36dfc0c70f4b" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.292338 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.296994 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" event={"ID":"e283eb17-df64-4565-8f82-a8afac963d4c","Type":"ContainerDied","Data":"b45357c837bd4b90d52df1299ea263310b9fb6aa66dd795d0f37b252d178e1e0"} Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.297060 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b45357c837bd4b90d52df1299ea263310b9fb6aa66dd795d0f37b252d178e1e0" Dec 13 03:47:29 crc kubenswrapper[4766]: I1213 03:47:29.297125 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426625-n2qvr" Dec 13 03:47:30 crc kubenswrapper[4766]: I1213 03:47:30.398260 4766 generic.go:334] "Generic (PLEG): container finished" podID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerID="319772b744c1f6f757f29417745f8e699c4183738fa94514e30bd1beac043b2c" exitCode=0 Dec 13 03:47:30 crc kubenswrapper[4766]: I1213 03:47:30.398338 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tjszx" event={"ID":"41447db8-0fe6-4772-8bbb-12a68ba33f1e","Type":"ContainerDied","Data":"319772b744c1f6f757f29417745f8e699c4183738fa94514e30bd1beac043b2c"} Dec 13 03:47:32 crc kubenswrapper[4766]: I1213 03:47:32.606936 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tjszx" event={"ID":"41447db8-0fe6-4772-8bbb-12a68ba33f1e","Type":"ContainerStarted","Data":"c4595c271d50ed41197ce53ef5dd02d54826be1e150e370d03168e157faea27f"} Dec 13 03:47:33 crc kubenswrapper[4766]: I1213 03:47:33.748090 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tjszx" Dec 13 03:47:33 crc kubenswrapper[4766]: I1213 03:47:33.748301 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 13 03:47:33 crc kubenswrapper[4766]: I1213 03:47:33.748369 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 13 03:47:34 crc kubenswrapper[4766]: I1213 03:47:34.907570 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 13 03:47:34 crc kubenswrapper[4766]: I1213 03:47:34.908089 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 13 03:47:35 crc kubenswrapper[4766]: I1213 03:47:35.877219 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:47:38 crc kubenswrapper[4766]: I1213 03:47:38.910366 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:47:38 crc kubenswrapper[4766]: I1213 03:47:38.919679 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-x546d" Dec 13 03:47:39 crc kubenswrapper[4766]: I1213 03:47:39.077016 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 13 03:47:39 crc kubenswrapper[4766]: I1213 03:47:39.077732 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 13 03:47:39 crc kubenswrapper[4766]: I1213 03:47:39.078139 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 13 03:47:39 crc kubenswrapper[4766]: I1213 03:47:39.078183 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 13 03:47:39 crc kubenswrapper[4766]: I1213 03:47:39.492585 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rw6wb" Dec 13 03:47:39 crc kubenswrapper[4766]: I1213 03:47:39.990052 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 03:47:39 crc kubenswrapper[4766]: I1213 03:47:39.990149 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 03:47:45 crc kubenswrapper[4766]: I1213 
03:47:45.947219 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 03:47:46 crc kubenswrapper[4766]: I1213 03:47:46.552784 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzfkv"] Dec 13 03:47:46 crc kubenswrapper[4766]: I1213 03:47:46.553196 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerName="controller-manager" containerID="cri-o://b1b56e0051aa83f23b35c8716adead635b1acc5e51b5e3d68a29f72c2d6789e5" gracePeriod=30 Dec 13 03:47:46 crc kubenswrapper[4766]: I1213 03:47:46.657372 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69"] Dec 13 03:47:46 crc kubenswrapper[4766]: I1213 03:47:46.657695 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerName="route-controller-manager" containerID="cri-o://c9fa0a8e90b61fa4b95214c45e56ba393c66a8d86ad42a513322d13e562b3e5b" gracePeriod=30 Dec 13 03:47:47 crc kubenswrapper[4766]: I1213 03:47:47.380793 4766 generic.go:334] "Generic (PLEG): container finished" podID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerID="b1b56e0051aa83f23b35c8716adead635b1acc5e51b5e3d68a29f72c2d6789e5" exitCode=0 Dec 13 03:47:47 crc kubenswrapper[4766]: I1213 03:47:47.381295 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" event={"ID":"59e373cc-61e8-4c1a-9734-6a2120179e36","Type":"ContainerDied","Data":"b1b56e0051aa83f23b35c8716adead635b1acc5e51b5e3d68a29f72c2d6789e5"} Dec 13 03:47:48 crc kubenswrapper[4766]: I1213 03:47:48.787172 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kxg69 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 13 03:47:48 crc kubenswrapper[4766]: I1213 03:47:48.787276 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Dec 13 03:47:48 crc kubenswrapper[4766]: I1213 03:47:48.807070 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nzfkv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 13 03:47:48 crc kubenswrapper[4766]: I1213 03:47:48.807142 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 13 03:47:49 crc kubenswrapper[4766]: I1213 
03:47:49.075362 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 13 03:47:49 crc kubenswrapper[4766]: I1213 03:47:49.075502 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 13 03:47:49 crc kubenswrapper[4766]: I1213 03:47:49.075419 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjszx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 13 03:47:49 crc kubenswrapper[4766]: I1213 03:47:49.075639 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tjszx" podUID="41447db8-0fe6-4772-8bbb-12a68ba33f1e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.047146 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 13 03:47:56 crc kubenswrapper[4766]: E1213 03:47:56.048224 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc576874-5930-44fc-9382-95b4d9aee001" containerName="pruner" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.048257 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc576874-5930-44fc-9382-95b4d9aee001" containerName="pruner" Dec 13 03:47:56 crc kubenswrapper[4766]: E1213 03:47:56.048280 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e283eb17-df64-4565-8f82-a8afac963d4c" containerName="collect-profiles" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.048289 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e283eb17-df64-4565-8f82-a8afac963d4c" containerName="collect-profiles" Dec 13 03:47:56 crc kubenswrapper[4766]: E1213 03:47:56.048302 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81d9226-937a-4f5f-a8f0-746425fffb1f" containerName="pruner" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.048312 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81d9226-937a-4f5f-a8f0-746425fffb1f" containerName="pruner" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.048499 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc576874-5930-44fc-9382-95b4d9aee001" containerName="pruner" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.048524 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e283eb17-df64-4565-8f82-a8afac963d4c" containerName="collect-profiles" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.048534 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81d9226-937a-4f5f-a8f0-746425fffb1f" containerName="pruner" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.049195 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.052138 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.052236 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.060592 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.148976 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f27eace-c54f-4937-876b-5d6d71b53682-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f27eace-c54f-4937-876b-5d6d71b53682\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.149340 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f27eace-c54f-4937-876b-5d6d71b53682-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f27eace-c54f-4937-876b-5d6d71b53682\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.250193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f27eace-c54f-4937-876b-5d6d71b53682-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f27eace-c54f-4937-876b-5d6d71b53682\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.250328 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f27eace-c54f-4937-876b-5d6d71b53682-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f27eace-c54f-4937-876b-5d6d71b53682\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.250411 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f27eace-c54f-4937-876b-5d6d71b53682-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f27eace-c54f-4937-876b-5d6d71b53682\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.284495 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f27eace-c54f-4937-876b-5d6d71b53682-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f27eace-c54f-4937-876b-5d6d71b53682\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 03:47:56 crc kubenswrapper[4766]: I1213 03:47:56.400883 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 13 03:47:58 crc kubenswrapper[4766]: I1213 03:47:58.787667 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kxg69 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Dec 13 03:47:58 crc kubenswrapper[4766]: I1213 03:47:58.788216 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Dec 13 03:47:58 crc kubenswrapper[4766]: I1213 03:47:58.805490 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nzfkv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Dec 13 03:47:58 crc kubenswrapper[4766]: I1213 03:47:58.805566 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Dec 13 03:47:59 crc kubenswrapper[4766]: I1213 03:47:59.084479 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tjszx"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.469055 4766 generic.go:334] "Generic (PLEG): container finished" podID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerID="c9fa0a8e90b61fa4b95214c45e56ba393c66a8d86ad42a513322d13e562b3e5b" exitCode=0
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.469368 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" event={"ID":"a26b74bc-41ac-4d65-b833-cb269d199ddd","Type":"ContainerDied","Data":"c9fa0a8e90b61fa4b95214c45e56ba393c66a8d86ad42a513322d13e562b3e5b"}
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.654031 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.655090 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.657641 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.838342 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.838422 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-var-lock\") pod \"installer-9-crc\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.838515 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kube-api-access\") pod \"installer-9-crc\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.948316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.948494 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-var-lock\") pod \"installer-9-crc\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.948709 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kube-api-access\") pod \"installer-9-crc\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.949588 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.949621 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-var-lock\") pod \"installer-9-crc\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.971035 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kube-api-access\") pod \"installer-9-crc\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:01 crc kubenswrapper[4766]: I1213 03:48:01.981001 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 13 03:48:08 crc kubenswrapper[4766]: I1213 03:48:08.788621 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kxg69 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Dec 13 03:48:08 crc kubenswrapper[4766]: I1213 03:48:08.791562 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Dec 13 03:48:09 crc kubenswrapper[4766]: I1213 03:48:09.732981 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 03:48:09 crc kubenswrapper[4766]: I1213 03:48:09.733072 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 03:48:09 crc kubenswrapper[4766]: I1213 03:48:09.733143 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94w9l"
Dec 13 03:48:09 crc kubenswrapper[4766]: I1213 03:48:09.734011 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420"} pod="openshift-machine-config-operator/machine-config-daemon-94w9l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 13 03:48:09 crc kubenswrapper[4766]: I1213 03:48:09.734080 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" containerID="cri-o://e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420" gracePeriod=600
Dec 13 03:48:09 crc kubenswrapper[4766]: I1213 03:48:09.805294 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nzfkv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 03:48:09 crc kubenswrapper[4766]: I1213 03:48:09.805454 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 03:48:18 crc kubenswrapper[4766]: I1213 03:48:18.788357 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kxg69 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Dec 13 03:48:18 crc kubenswrapper[4766]: I1213 03:48:18.788904 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Dec 13 03:48:19 crc kubenswrapper[4766]: I1213 03:48:19.806323 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nzfkv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 03:48:19 crc kubenswrapper[4766]: I1213 03:48:19.806504 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.530390 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.562637 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"]
Dec 13 03:48:22 crc kubenswrapper[4766]: E1213 03:48:22.563005 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerName="controller-manager"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.563023 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerName="controller-manager"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.563131 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" containerName="controller-manager"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.563630 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.601411 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"]
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.651960 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv" event={"ID":"59e373cc-61e8-4c1a-9734-6a2120179e36","Type":"ContainerDied","Data":"c6691347b992c54d70de1cd82d27df2aa9b40b80c5508da047c6788bd8664e65"}
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.652043 4766 scope.go:117] "RemoveContainer" containerID="b1b56e0051aa83f23b35c8716adead635b1acc5e51b5e3d68a29f72c2d6789e5"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.652189 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nzfkv"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.676045 4766 generic.go:334] "Generic (PLEG): container finished" podID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerID="e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420" exitCode=0
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.676118 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerDied","Data":"e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420"}
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.727837 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6hj5\" (UniqueName: \"kubernetes.io/projected/59e373cc-61e8-4c1a-9734-6a2120179e36-kube-api-access-w6hj5\") pod \"59e373cc-61e8-4c1a-9734-6a2120179e36\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") "
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.727899 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-proxy-ca-bundles\") pod \"59e373cc-61e8-4c1a-9734-6a2120179e36\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") "
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.727927 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-client-ca\") pod \"59e373cc-61e8-4c1a-9734-6a2120179e36\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") "
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.727963 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59e373cc-61e8-4c1a-9734-6a2120179e36-serving-cert\") pod \"59e373cc-61e8-4c1a-9734-6a2120179e36\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") "
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.728070 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-config\") pod \"59e373cc-61e8-4c1a-9734-6a2120179e36\" (UID: \"59e373cc-61e8-4c1a-9734-6a2120179e36\") "
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.728238 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-client-ca\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.728308 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-serving-cert\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.728450 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-proxy-ca-bundles\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.728488 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdb4r\" (UniqueName: \"kubernetes.io/projected/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-kube-api-access-pdb4r\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.728519 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-config\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.729395 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "59e373cc-61e8-4c1a-9734-6a2120179e36" (UID: "59e373cc-61e8-4c1a-9734-6a2120179e36"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.729387 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-client-ca" (OuterVolumeSpecName: "client-ca") pod "59e373cc-61e8-4c1a-9734-6a2120179e36" (UID: "59e373cc-61e8-4c1a-9734-6a2120179e36"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.730339 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-config" (OuterVolumeSpecName: "config") pod "59e373cc-61e8-4c1a-9734-6a2120179e36" (UID: "59e373cc-61e8-4c1a-9734-6a2120179e36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.753883 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e373cc-61e8-4c1a-9734-6a2120179e36-kube-api-access-w6hj5" (OuterVolumeSpecName: "kube-api-access-w6hj5") pod "59e373cc-61e8-4c1a-9734-6a2120179e36" (UID: "59e373cc-61e8-4c1a-9734-6a2120179e36"). InnerVolumeSpecName "kube-api-access-w6hj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.754278 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e373cc-61e8-4c1a-9734-6a2120179e36-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59e373cc-61e8-4c1a-9734-6a2120179e36" (UID: "59e373cc-61e8-4c1a-9734-6a2120179e36"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.845727 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-client-ca\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.845825 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-serving-cert\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.845889 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-proxy-ca-bundles\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.845922 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdb4r\" (UniqueName: \"kubernetes.io/projected/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-kube-api-access-pdb4r\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.845957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-config\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.846040 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-config\") on node \"crc\" DevicePath \"\""
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.846059 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6hj5\" (UniqueName: \"kubernetes.io/projected/59e373cc-61e8-4c1a-9734-6a2120179e36-kube-api-access-w6hj5\") on node \"crc\" DevicePath \"\""
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.846073 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.846088 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59e373cc-61e8-4c1a-9734-6a2120179e36-client-ca\") on node \"crc\" DevicePath \"\""
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.846099 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59e373cc-61e8-4c1a-9734-6a2120179e36-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.847477 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-client-ca\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.847955 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-proxy-ca-bundles\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.848756 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-config\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.853384 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-serving-cert\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.872263 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdb4r\" (UniqueName: \"kubernetes.io/projected/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-kube-api-access-pdb4r\") pod \"controller-manager-8d8bb5ffc-zk5q7\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.907532 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.985655 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzfkv"]
Dec 13 03:48:22 crc kubenswrapper[4766]: I1213 03:48:22.988883 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzfkv"]
Dec 13 03:48:23 crc kubenswrapper[4766]: I1213 03:48:23.627924 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e373cc-61e8-4c1a-9734-6a2120179e36" path="/var/lib/kubelet/pods/59e373cc-61e8-4c1a-9734-6a2120179e36/volumes"
Dec 13 03:48:28 crc kubenswrapper[4766]: E1213 03:48:28.952531 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 13 03:48:28 crc kubenswrapper[4766]: E1213 03:48:28.953311 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8q56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8d4vh_openshift-marketplace(b65de837-4baa-4aff-98cb-2babbdfdb2f5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 13 03:48:28 crc kubenswrapper[4766]: E1213 03:48:28.954569 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8d4vh" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5"
Dec 13 03:48:28 crc kubenswrapper[4766]: E1213 03:48:28.972220 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 13 03:48:28 crc kubenswrapper[4766]: E1213 03:48:28.972495 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hks6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kq9t5_openshift-marketplace(116c9e00-53c9-4d49-813e-2f0dd2b24411): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 13 03:48:28 crc kubenswrapper[4766]: E1213 03:48:28.973715 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kq9t5" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411"
Dec 13 03:48:29 crc kubenswrapper[4766]: I1213 03:48:29.786800 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kxg69 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 03:48:29 crc kubenswrapper[4766]: I1213 03:48:29.787266 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 03:48:30 crc kubenswrapper[4766]: E1213 03:48:30.065550 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kq9t5" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411"
Dec 13 03:48:30 crc kubenswrapper[4766]: E1213 03:48:30.065657 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8d4vh" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5"
Dec 13 03:48:30 crc kubenswrapper[4766]: E1213 03:48:30.130325 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Dec 13 03:48:30 crc kubenswrapper[4766]: E1213 03:48:30.130567 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfnnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-d95nn_openshift-marketplace(9c50096c-0297-4e96-868c-d34cfc326d46): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 13 03:48:30 crc kubenswrapper[4766]: E1213 03:48:30.132554 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-d95nn" podUID="9c50096c-0297-4e96-868c-d34cfc326d46"
Dec 13 03:48:30 crc kubenswrapper[4766]: E1213 03:48:30.155743 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Dec 13 03:48:30 crc kubenswrapper[4766]: E1213 03:48:30.155979 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2c6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fsl6d_openshift-marketplace(cf9aebbf-5f0a-4684-b6fc-a85c909dbb34): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 13 03:48:30 crc kubenswrapper[4766]: E1213 03:48:30.157229 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fsl6d" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34"
Dec 13 03:48:31 crc kubenswrapper[4766]: E1213 03:48:31.943056 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-d95nn" podUID="9c50096c-0297-4e96-868c-d34cfc326d46"
Dec 13 03:48:31 crc kubenswrapper[4766]: E1213 03:48:31.943064 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fsl6d" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34"
Dec 13 03:48:32 crc kubenswrapper[4766]: E1213 03:48:32.020626 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Dec 13 03:48:32 crc kubenswrapper[4766]: E1213 03:48:32.021083 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kstrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hzmqb_openshift-marketplace(35afc352-db00-48b9-b888-6b8b9bc36403): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 13 03:48:32 crc kubenswrapper[4766]: E1213 03:48:32.022455 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hzmqb" podUID="35afc352-db00-48b9-b888-6b8b9bc36403"
Dec 13 03:48:32 crc kubenswrapper[4766]: E1213 03:48:32.029129 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Dec 13 03:48:32 crc kubenswrapper[4766]: E1213 03:48:32.029309 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zksbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-p7x6l_openshift-marketplace(456b5fc9-3dab-40fc-81c5-ab9ea1f110dd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 13 03:48:32 crc kubenswrapper[4766]: E1213 03:48:32.030509 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p7x6l" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd"
Dec 13 03:48:34 crc kubenswrapper[4766]: E1213 03:48:34.188220 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hzmqb" podUID="35afc352-db00-48b9-b888-6b8b9bc36403"
Dec 13 03:48:34 crc kubenswrapper[4766]: E1213 03:48:34.189228 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p7x6l" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.326757 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.377771 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"]
Dec 13 03:48:34 crc kubenswrapper[4766]: E1213 03:48:34.378370 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerName="route-controller-manager"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.378398 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerName="route-controller-manager"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.378612 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" containerName="route-controller-manager"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.379285 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.382406 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"]
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.389199 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a26b74bc-41ac-4d65-b833-cb269d199ddd-serving-cert\") pod \"a26b74bc-41ac-4d65-b833-cb269d199ddd\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") "
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.389280 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-client-ca\") pod \"a26b74bc-41ac-4d65-b833-cb269d199ddd\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") "
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.389335 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-config\") pod \"a26b74bc-41ac-4d65-b833-cb269d199ddd\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") "
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.389384 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms5qw\" (UniqueName: \"kubernetes.io/projected/a26b74bc-41ac-4d65-b833-cb269d199ddd-kube-api-access-ms5qw\") pod \"a26b74bc-41ac-4d65-b833-cb269d199ddd\" (UID: \"a26b74bc-41ac-4d65-b833-cb269d199ddd\") "
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.389573 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-config\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.389606 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-serving-cert\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.389692 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-client-ca\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.389745 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wtc\" (UniqueName: \"kubernetes.io/projected/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-kube-api-access-c4wtc\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.391208 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-client-ca" (OuterVolumeSpecName: "client-ca") pod "a26b74bc-41ac-4d65-b833-cb269d199ddd" (UID: "a26b74bc-41ac-4d65-b833-cb269d199ddd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.391480 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-config" (OuterVolumeSpecName: "config") pod "a26b74bc-41ac-4d65-b833-cb269d199ddd" (UID: "a26b74bc-41ac-4d65-b833-cb269d199ddd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.399237 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a26b74bc-41ac-4d65-b833-cb269d199ddd-kube-api-access-ms5qw" (OuterVolumeSpecName: "kube-api-access-ms5qw") pod "a26b74bc-41ac-4d65-b833-cb269d199ddd" (UID: "a26b74bc-41ac-4d65-b833-cb269d199ddd"). InnerVolumeSpecName "kube-api-access-ms5qw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.402001 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a26b74bc-41ac-4d65-b833-cb269d199ddd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a26b74bc-41ac-4d65-b833-cb269d199ddd" (UID: "a26b74bc-41ac-4d65-b833-cb269d199ddd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.490740 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-client-ca\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.490818 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wtc\" (UniqueName: \"kubernetes.io/projected/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-kube-api-access-c4wtc\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.490885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-config\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.490904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-serving-cert\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.491736 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-client-ca\") on node \"crc\" DevicePath \"\""
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.491937 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-client-ca\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.492734 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a26b74bc-41ac-4d65-b833-cb269d199ddd-config\") on node \"crc\" DevicePath \"\""
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.492765 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms5qw\" (UniqueName: \"kubernetes.io/projected/a26b74bc-41ac-4d65-b833-cb269d199ddd-kube-api-access-ms5qw\") on node \"crc\" DevicePath \"\""
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.492781 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a26b74bc-41ac-4d65-b833-cb269d199ddd-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.496158 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-config\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.502734 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"]
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.504538 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-serving-cert\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: W1213 03:48:34.510467 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce643eda_a1c4_46f1_b16e_d5aff12de5c8.slice/crio-b8fd1cd467fe2f9b921e57f98391e461646421b41ff9bdf8649d40eccbb0d6f6 WatchSource:0}: Error finding container b8fd1cd467fe2f9b921e57f98391e461646421b41ff9bdf8649d40eccbb0d6f6: Status 404 returned error can't find the container with id b8fd1cd467fe2f9b921e57f98391e461646421b41ff9bdf8649d40eccbb0d6f6
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.512082 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wtc\" (UniqueName: \"kubernetes.io/projected/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-kube-api-access-c4wtc\") pod \"route-controller-manager-66ddb98bcf-wkhw6\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.558303 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.669403 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.716140 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.747820 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f27eace-c54f-4937-876b-5d6d71b53682","Type":"ContainerStarted","Data":"2c581a8808174643d95bec8845120cf1b15ac47606b8e13605d65663d17507a3"}
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.748657 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"151a6cc7-14cd-4ac6-a65e-d165c2e8520f","Type":"ContainerStarted","Data":"4744b134aa296efc3655e6173ed1a92a19e21a686c4f2cae46d1d1ca1858dddf"}
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.750372 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69" event={"ID":"a26b74bc-41ac-4d65-b833-cb269d199ddd","Type":"ContainerDied","Data":"8889c39d0c6b8bc88cf243170fd590dc166fe5ea3466f653a63977c12b25178a"}
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.750410 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.750487 4766 scope.go:117] "RemoveContainer" containerID="c9fa0a8e90b61fa4b95214c45e56ba393c66a8d86ad42a513322d13e562b3e5b"
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.755842 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7" event={"ID":"ce643eda-a1c4-46f1-b16e-d5aff12de5c8","Type":"ContainerStarted","Data":"b8fd1cd467fe2f9b921e57f98391e461646421b41ff9bdf8649d40eccbb0d6f6"}
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.800190 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69"]
Dec 13 03:48:34 crc kubenswrapper[4766]: I1213 03:48:34.803751 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kxg69"]
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.041455 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l2gzj"]
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.170558 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"]
Dec 13 03:48:35 crc kubenswrapper[4766]: W1213 03:48:35.182451 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12e81bc4_0ca7_4b47_a35a_91c6e71fd42c.slice/crio-717ae3ffc4c710744a57c375e4c2390457d02ccbcfa86ec2ef315b5b9d4932f7 WatchSource:0}: Error finding container 717ae3ffc4c710744a57c375e4c2390457d02ccbcfa86ec2ef315b5b9d4932f7: Status 404 returned error can't find the container with id 717ae3ffc4c710744a57c375e4c2390457d02ccbcfa86ec2ef315b5b9d4932f7
Dec 13 03:48:35 crc kubenswrapper[4766]: E1213 03:48:35.477513 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 13 03:48:35 crc kubenswrapper[4766]: E1213 03:48:35.477761 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgvff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wb4lm_openshift-marketplace(16b3386a-f52e-47b8-a0cb-172dd34f4761): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 13 03:48:35 crc kubenswrapper[4766]: E1213 03:48:35.478965 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wb4lm" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761"
Dec 13 03:48:35 crc kubenswrapper[4766]: E1213 03:48:35.537095 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 13 03:48:35 crc kubenswrapper[4766]: E1213 03:48:35.537281 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68z9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6dqhc_openshift-marketplace(f162311d-72df-42b4-b586-7bc1d4945c99): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 13 03:48:35 crc kubenswrapper[4766]: E1213 03:48:35.538500 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6dqhc" podUID="f162311d-72df-42b4-b586-7bc1d4945c99"
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.624536 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a26b74bc-41ac-4d65-b833-cb269d199ddd" path="/var/lib/kubelet/pods/a26b74bc-41ac-4d65-b833-cb269d199ddd/volumes"
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.764472 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"ca44fbdb60b7c4e21f7f23576ab2a08072b8d79ddd151b8f9523f542ad2e0779"}
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.766221 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"151a6cc7-14cd-4ac6-a65e-d165c2e8520f","Type":"ContainerStarted","Data":"9c09fafd2652185b304016008d94651e7b185bc694ee212b6429a3a7b31a72da"}
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.767558 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6" event={"ID":"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c","Type":"ContainerStarted","Data":"63f65e87c04fa23dd153075badbc11928c0282c8a61ab92c296c07508bd3b6f0"}
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.767599 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6" event={"ID":"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c","Type":"ContainerStarted","Data":"717ae3ffc4c710744a57c375e4c2390457d02ccbcfa86ec2ef315b5b9d4932f7"}
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.768460 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.773475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7" event={"ID":"ce643eda-a1c4-46f1-b16e-d5aff12de5c8","Type":"ContainerStarted","Data":"53f5cc19d65493c375120570bed2ca11e4189566f53c9d6161d1a97485c533e4"}
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.773985 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.780748 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f27eace-c54f-4937-876b-5d6d71b53682","Type":"ContainerStarted","Data":"2ab0fe439eefc55239d1b75d4c4146ab9b2dc03bb7e6db1f7b5bff92032a0a5b"}
Dec 13 03:48:35 crc kubenswrapper[4766]: E1213 03:48:35.783830 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wb4lm" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761"
Dec 13 03:48:35 crc kubenswrapper[4766]: E1213 03:48:35.783916 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6dqhc" podUID="f162311d-72df-42b4-b586-7bc1d4945c99"
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.787453 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.828752 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=39.828726863 podStartE2EDuration="39.828726863s" podCreationTimestamp="2025-12-13 03:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:48:35.825339337 +0000 UTC m=+247.335272301" watchObservedRunningTime="2025-12-13 03:48:35.828726863 +0000 UTC m=+247.338659827"
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.858602 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7" podStartSLOduration=29.858578909 podStartE2EDuration="29.858578909s" podCreationTimestamp="2025-12-13 03:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:48:35.85159169 +0000 UTC m=+247.361524654" watchObservedRunningTime="2025-12-13 03:48:35.858578909 +0000 UTC m=+247.368511873"
Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.887988 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=34.887962889 podStartE2EDuration="34.887962889s" podCreationTimestamp="2025-12-13 03:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:48:35.88702087 +0000 UTC m=+247.396953834" watchObservedRunningTime="2025-12-13 03:48:35.887962889 +0000 UTC m=+247.397895853" Dec 13 03:48:35 crc kubenswrapper[4766]: I1213 03:48:35.961574 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6" podStartSLOduration=29.961547764 podStartE2EDuration="29.961547764s" podCreationTimestamp="2025-12-13 03:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:48:35.959918123 +0000 UTC m=+247.469851107" watchObservedRunningTime="2025-12-13 03:48:35.961547764 +0000 UTC m=+247.471480718" Dec 13 03:48:36 crc kubenswrapper[4766]: I1213 03:48:36.299570 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6" Dec 13 03:48:36 crc kubenswrapper[4766]: I1213 03:48:36.789138 4766 generic.go:334] "Generic (PLEG): container finished" podID="3f27eace-c54f-4937-876b-5d6d71b53682" containerID="2ab0fe439eefc55239d1b75d4c4146ab9b2dc03bb7e6db1f7b5bff92032a0a5b" exitCode=0 Dec 13 03:48:36 crc kubenswrapper[4766]: I1213 03:48:36.790542 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f27eace-c54f-4937-876b-5d6d71b53682","Type":"ContainerDied","Data":"2ab0fe439eefc55239d1b75d4c4146ab9b2dc03bb7e6db1f7b5bff92032a0a5b"} Dec 13 03:48:38 crc kubenswrapper[4766]: I1213 03:48:38.083824 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 03:48:38 crc kubenswrapper[4766]: I1213 03:48:38.256687 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f27eace-c54f-4937-876b-5d6d71b53682-kube-api-access\") pod \"3f27eace-c54f-4937-876b-5d6d71b53682\" (UID: \"3f27eace-c54f-4937-876b-5d6d71b53682\") " Dec 13 03:48:38 crc kubenswrapper[4766]: I1213 03:48:38.256847 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f27eace-c54f-4937-876b-5d6d71b53682-kubelet-dir\") pod \"3f27eace-c54f-4937-876b-5d6d71b53682\" (UID: \"3f27eace-c54f-4937-876b-5d6d71b53682\") " Dec 13 03:48:38 crc kubenswrapper[4766]: I1213 03:48:38.256973 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f27eace-c54f-4937-876b-5d6d71b53682-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f27eace-c54f-4937-876b-5d6d71b53682" (UID: "3f27eace-c54f-4937-876b-5d6d71b53682"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:48:38 crc kubenswrapper[4766]: I1213 03:48:38.257753 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f27eace-c54f-4937-876b-5d6d71b53682-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:38 crc kubenswrapper[4766]: I1213 03:48:38.267160 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f27eace-c54f-4937-876b-5d6d71b53682-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f27eace-c54f-4937-876b-5d6d71b53682" (UID: "3f27eace-c54f-4937-876b-5d6d71b53682"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:48:38 crc kubenswrapper[4766]: I1213 03:48:38.359440 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f27eace-c54f-4937-876b-5d6d71b53682-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:38 crc kubenswrapper[4766]: I1213 03:48:38.807782 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f27eace-c54f-4937-876b-5d6d71b53682","Type":"ContainerDied","Data":"2c581a8808174643d95bec8845120cf1b15ac47606b8e13605d65663d17507a3"} Dec 13 03:48:38 crc kubenswrapper[4766]: I1213 03:48:38.807833 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 03:48:38 crc kubenswrapper[4766]: I1213 03:48:38.807855 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c581a8808174643d95bec8845120cf1b15ac47606b8e13605d65663d17507a3" Dec 13 03:48:44 crc kubenswrapper[4766]: I1213 03:48:44.844265 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4vh" event={"ID":"b65de837-4baa-4aff-98cb-2babbdfdb2f5","Type":"ContainerStarted","Data":"29b7f5aa398c96c8afac60f36c7d2e6981fa9d2993913774b3898e829cf0dbcd"} Dec 13 03:48:45 crc kubenswrapper[4766]: I1213 03:48:45.853088 4766 generic.go:334] "Generic (PLEG): container finished" podID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerID="29b7f5aa398c96c8afac60f36c7d2e6981fa9d2993913774b3898e829cf0dbcd" exitCode=0 Dec 13 03:48:45 crc kubenswrapper[4766]: I1213 03:48:45.853148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4vh" event={"ID":"b65de837-4baa-4aff-98cb-2babbdfdb2f5","Type":"ContainerDied","Data":"29b7f5aa398c96c8afac60f36c7d2e6981fa9d2993913774b3898e829cf0dbcd"} Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.327323 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"] Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.328569 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7" podUID="ce643eda-a1c4-46f1-b16e-d5aff12de5c8" containerName="controller-manager" containerID="cri-o://53f5cc19d65493c375120570bed2ca11e4189566f53c9d6161d1a97485c533e4" gracePeriod=30 Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.458261 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"] Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.458549 4766 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6" podUID="12e81bc4-0ca7-4b47-a35a-91c6e71fd42c" containerName="route-controller-manager" containerID="cri-o://63f65e87c04fa23dd153075badbc11928c0282c8a61ab92c296c07508bd3b6f0" gracePeriod=30 Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.861031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq9t5" event={"ID":"116c9e00-53c9-4d49-813e-2f0dd2b24411","Type":"ContainerStarted","Data":"4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18"} Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.863153 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzmqb" event={"ID":"35afc352-db00-48b9-b888-6b8b9bc36403","Type":"ContainerStarted","Data":"855c45cc8a8626aa9fb7c0b5f22ea9216cc1a7414ddd4f008048095d2170d012"} Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.867159 4766 generic.go:334] "Generic (PLEG): container finished" podID="ce643eda-a1c4-46f1-b16e-d5aff12de5c8" containerID="53f5cc19d65493c375120570bed2ca11e4189566f53c9d6161d1a97485c533e4" exitCode=0 Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.867291 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7" event={"ID":"ce643eda-a1c4-46f1-b16e-d5aff12de5c8","Type":"ContainerDied","Data":"53f5cc19d65493c375120570bed2ca11e4189566f53c9d6161d1a97485c533e4"} Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.870841 4766 generic.go:334] "Generic (PLEG): container finished" podID="12e81bc4-0ca7-4b47-a35a-91c6e71fd42c" containerID="63f65e87c04fa23dd153075badbc11928c0282c8a61ab92c296c07508bd3b6f0" exitCode=0 Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.870907 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6" event={"ID":"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c","Type":"ContainerDied","Data":"63f65e87c04fa23dd153075badbc11928c0282c8a61ab92c296c07508bd3b6f0"} Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.873951 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4vh" event={"ID":"b65de837-4baa-4aff-98cb-2babbdfdb2f5","Type":"ContainerStarted","Data":"f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7"} Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.878197 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7x6l" event={"ID":"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd","Type":"ContainerStarted","Data":"12aef1a3a7fb5c27671ca060ddc3adb44ed400eeed55c5ecb3034609cdf90d76"} Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.893815 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6" Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.896764 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7" Dec 13 03:48:46 crc kubenswrapper[4766]: I1213 03:48:46.974743 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8d4vh" podStartSLOduration=7.153745108 podStartE2EDuration="1m33.974718398s" podCreationTimestamp="2025-12-13 03:47:13 +0000 UTC" firstStartedPulling="2025-12-13 03:47:19.620678159 +0000 UTC m=+171.130611123" lastFinishedPulling="2025-12-13 03:48:46.441651449 +0000 UTC m=+257.951584413" observedRunningTime="2025-12-13 03:48:46.948502327 +0000 UTC m=+258.458435291" watchObservedRunningTime="2025-12-13 03:48:46.974718398 +0000 UTC m=+258.484651362" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.028801 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4wtc\" (UniqueName: \"kubernetes.io/projected/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-kube-api-access-c4wtc\") pod \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.028865 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdb4r\" (UniqueName: \"kubernetes.io/projected/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-kube-api-access-pdb4r\") pod \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.028916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-config\") pod \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.028944 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-proxy-ca-bundles\") pod \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.028971 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-serving-cert\") pod \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.028987 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-config\") pod \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.029014 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-client-ca\") pod \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\" (UID: \"ce643eda-a1c4-46f1-b16e-d5aff12de5c8\") " Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.029032 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-client-ca\") pod \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " Dec 13 03:48:47 crc 
kubenswrapper[4766]: I1213 03:48:47.029061 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-serving-cert\") pod \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\" (UID: \"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c\") " Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.029771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ce643eda-a1c4-46f1-b16e-d5aff12de5c8" (UID: "ce643eda-a1c4-46f1-b16e-d5aff12de5c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.032694 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-client-ca" (OuterVolumeSpecName: "client-ca") pod "12e81bc4-0ca7-4b47-a35a-91c6e71fd42c" (UID: "12e81bc4-0ca7-4b47-a35a-91c6e71fd42c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.033159 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-config" (OuterVolumeSpecName: "config") pod "12e81bc4-0ca7-4b47-a35a-91c6e71fd42c" (UID: "12e81bc4-0ca7-4b47-a35a-91c6e71fd42c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.033687 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce643eda-a1c4-46f1-b16e-d5aff12de5c8" (UID: "ce643eda-a1c4-46f1-b16e-d5aff12de5c8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.034936 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-config" (OuterVolumeSpecName: "config") pod "ce643eda-a1c4-46f1-b16e-d5aff12de5c8" (UID: "ce643eda-a1c4-46f1-b16e-d5aff12de5c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.036098 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce643eda-a1c4-46f1-b16e-d5aff12de5c8" (UID: "ce643eda-a1c4-46f1-b16e-d5aff12de5c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.036213 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "12e81bc4-0ca7-4b47-a35a-91c6e71fd42c" (UID: "12e81bc4-0ca7-4b47-a35a-91c6e71fd42c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.036275 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-kube-api-access-pdb4r" (OuterVolumeSpecName: "kube-api-access-pdb4r") pod "ce643eda-a1c4-46f1-b16e-d5aff12de5c8" (UID: "ce643eda-a1c4-46f1-b16e-d5aff12de5c8"). InnerVolumeSpecName "kube-api-access-pdb4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.036737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-kube-api-access-c4wtc" (OuterVolumeSpecName: "kube-api-access-c4wtc") pod "12e81bc4-0ca7-4b47-a35a-91c6e71fd42c" (UID: "12e81bc4-0ca7-4b47-a35a-91c6e71fd42c"). InnerVolumeSpecName "kube-api-access-c4wtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.130172 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.130214 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.130251 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.130270 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.130295 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.130307 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4wtc\" (UniqueName: \"kubernetes.io/projected/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-kube-api-access-c4wtc\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.130318 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdb4r\" (UniqueName: \"kubernetes.io/projected/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-kube-api-access-pdb4r\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.130326 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.130334 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce643eda-a1c4-46f1-b16e-d5aff12de5c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.885517 4766 generic.go:334] "Generic (PLEG): container finished" podID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" 
containerID="12aef1a3a7fb5c27671ca060ddc3adb44ed400eeed55c5ecb3034609cdf90d76" exitCode=0 Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.885597 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7x6l" event={"ID":"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd","Type":"ContainerDied","Data":"12aef1a3a7fb5c27671ca060ddc3adb44ed400eeed55c5ecb3034609cdf90d76"} Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.888481 4766 generic.go:334] "Generic (PLEG): container finished" podID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerID="4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18" exitCode=0 Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.888630 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq9t5" event={"ID":"116c9e00-53c9-4d49-813e-2f0dd2b24411","Type":"ContainerDied","Data":"4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18"} Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.891274 4766 generic.go:334] "Generic (PLEG): container finished" podID="35afc352-db00-48b9-b888-6b8b9bc36403" containerID="855c45cc8a8626aa9fb7c0b5f22ea9216cc1a7414ddd4f008048095d2170d012" exitCode=0 Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.891349 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzmqb" event={"ID":"35afc352-db00-48b9-b888-6b8b9bc36403","Type":"ContainerDied","Data":"855c45cc8a8626aa9fb7c0b5f22ea9216cc1a7414ddd4f008048095d2170d012"} Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.900176 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7" event={"ID":"ce643eda-a1c4-46f1-b16e-d5aff12de5c8","Type":"ContainerDied","Data":"b8fd1cd467fe2f9b921e57f98391e461646421b41ff9bdf8649d40eccbb0d6f6"} Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.900207 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.900252 4766 scope.go:117] "RemoveContainer" containerID="53f5cc19d65493c375120570bed2ca11e4189566f53c9d6161d1a97485c533e4" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.902743 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6" event={"ID":"12e81bc4-0ca7-4b47-a35a-91c6e71fd42c","Type":"ContainerDied","Data":"717ae3ffc4c710744a57c375e4c2390457d02ccbcfa86ec2ef315b5b9d4932f7"} Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.902798 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.918517 4766 scope.go:117] "RemoveContainer" containerID="63f65e87c04fa23dd153075badbc11928c0282c8a61ab92c296c07508bd3b6f0" Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.959274 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"] Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.962449 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66ddb98bcf-wkhw6"] Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.972761 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"] Dec 13 03:48:47 crc kubenswrapper[4766]: I1213 03:48:47.976838 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8d8bb5ffc-zk5q7"] Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.109113 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c4b47bc-vshkm"] Dec 13 03:48:48 crc kubenswrapper[4766]: E1213 03:48:48.109706 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce643eda-a1c4-46f1-b16e-d5aff12de5c8" containerName="controller-manager" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.109804 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce643eda-a1c4-46f1-b16e-d5aff12de5c8" containerName="controller-manager" Dec 13 03:48:48 crc kubenswrapper[4766]: E1213 03:48:48.109911 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f27eace-c54f-4937-876b-5d6d71b53682" containerName="pruner" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.110009 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f27eace-c54f-4937-876b-5d6d71b53682" containerName="pruner" Dec 13 03:48:48 crc kubenswrapper[4766]: E1213 03:48:48.110110 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e81bc4-0ca7-4b47-a35a-91c6e71fd42c" containerName="route-controller-manager" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.110248 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e81bc4-0ca7-4b47-a35a-91c6e71fd42c" containerName="route-controller-manager" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.110416 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce643eda-a1c4-46f1-b16e-d5aff12de5c8" containerName="controller-manager" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.110525 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e81bc4-0ca7-4b47-a35a-91c6e71fd42c" containerName="route-controller-manager" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.110614 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f27eace-c54f-4937-876b-5d6d71b53682" containerName="pruner" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.111173 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.116228 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.116724 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.116928 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.117116 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.117248 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.117450 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.118367 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2"] Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.119855 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.120746 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c4b47bc-vshkm"] Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.124157 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.124462 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.124711 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.124768 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.124868 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.125681 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.127353 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.135194 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2"] Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.362834 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-config\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.362911 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h44rs\" (UniqueName: \"kubernetes.io/projected/3f461109-9a3d-4f7a-9265-1db9a81122e5-kube-api-access-h44rs\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.362941 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-proxy-ca-bundles\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.362979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-config\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.363017 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f461109-9a3d-4f7a-9265-1db9a81122e5-serving-cert\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.363047 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k6t5\" (UniqueName: \"kubernetes.io/projected/c4724b2a-c46a-4d7b-8227-6a15e978dd16-kube-api-access-9k6t5\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.363120 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-client-ca\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.363150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4724b2a-c46a-4d7b-8227-6a15e978dd16-serving-cert\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.363194 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-client-ca\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.464172 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-config\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.464229 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h44rs\" (UniqueName: \"kubernetes.io/projected/3f461109-9a3d-4f7a-9265-1db9a81122e5-kube-api-access-h44rs\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.464262 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-proxy-ca-bundles\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.464296 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-config\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.464319 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f461109-9a3d-4f7a-9265-1db9a81122e5-serving-cert\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.464358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k6t5\" (UniqueName: \"kubernetes.io/projected/c4724b2a-c46a-4d7b-8227-6a15e978dd16-kube-api-access-9k6t5\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.464393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-client-ca\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.464413 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4724b2a-c46a-4d7b-8227-6a15e978dd16-serving-cert\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: 
\"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.464477 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-client-ca\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.465632 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-client-ca\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.465868 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-config\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.466458 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-client-ca\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.467702 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-proxy-ca-bundles\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.468263 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-config\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.475572 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f461109-9a3d-4f7a-9265-1db9a81122e5-serving-cert\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.476113 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4724b2a-c46a-4d7b-8227-6a15e978dd16-serving-cert\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.490713 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9k6t5\" (UniqueName: \"kubernetes.io/projected/c4724b2a-c46a-4d7b-8227-6a15e978dd16-kube-api-access-9k6t5\") pod \"route-controller-manager-5c884cd845-kkzq2\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") " pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.494762 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h44rs\" (UniqueName: \"kubernetes.io/projected/3f461109-9a3d-4f7a-9265-1db9a81122e5-kube-api-access-h44rs\") pod \"controller-manager-6c4b47bc-vshkm\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") " pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.671355 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.739134 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.953455 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2"] Dec 13 03:48:48 crc kubenswrapper[4766]: W1213 03:48:48.960332 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4724b2a_c46a_4d7b_8227_6a15e978dd16.slice/crio-35d7947069747b67558b41cee9306b0b684e6dbc1f3f65188cec7f1934c34bb3 WatchSource:0}: Error finding container 35d7947069747b67558b41cee9306b0b684e6dbc1f3f65188cec7f1934c34bb3: Status 404 returned error can't find the container with id 35d7947069747b67558b41cee9306b0b684e6dbc1f3f65188cec7f1934c34bb3 Dec 13 03:48:48 crc kubenswrapper[4766]: I1213 03:48:48.992363 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c4b47bc-vshkm"] Dec 13 03:48:49 crc kubenswrapper[4766]: W1213 03:48:49.000255 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f461109_9a3d_4f7a_9265_1db9a81122e5.slice/crio-93a46cf412c7066851c18ae87582816734ef215bfffd1b3aa511740ddc126fc2 WatchSource:0}: Error finding container 93a46cf412c7066851c18ae87582816734ef215bfffd1b3aa511740ddc126fc2: Status 404 returned error can't find the container with id 93a46cf412c7066851c18ae87582816734ef215bfffd1b3aa511740ddc126fc2 Dec 13 03:48:49 crc kubenswrapper[4766]: I1213 03:48:49.630114 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e81bc4-0ca7-4b47-a35a-91c6e71fd42c" path="/var/lib/kubelet/pods/12e81bc4-0ca7-4b47-a35a-91c6e71fd42c/volumes" Dec 13 03:48:49 crc kubenswrapper[4766]: I1213 03:48:49.631364 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce643eda-a1c4-46f1-b16e-d5aff12de5c8" path="/var/lib/kubelet/pods/ce643eda-a1c4-46f1-b16e-d5aff12de5c8/volumes" Dec 13 03:48:49 crc kubenswrapper[4766]: I1213 03:48:49.922239 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" event={"ID":"3f461109-9a3d-4f7a-9265-1db9a81122e5","Type":"ContainerStarted","Data":"93a46cf412c7066851c18ae87582816734ef215bfffd1b3aa511740ddc126fc2"} Dec 13 03:48:49 crc kubenswrapper[4766]: I1213 03:48:49.923808 
4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" event={"ID":"c4724b2a-c46a-4d7b-8227-6a15e978dd16","Type":"ContainerStarted","Data":"35d7947069747b67558b41cee9306b0b684e6dbc1f3f65188cec7f1934c34bb3"}
Dec 13 03:48:50 crc kubenswrapper[4766]: I1213 03:48:50.952826 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" event={"ID":"c4724b2a-c46a-4d7b-8227-6a15e978dd16","Type":"ContainerStarted","Data":"d4e7ed6e0f7fa0185b3d7cdde59f6541e344873cb13cec7d5162a61c808874d8"}
Dec 13 03:48:50 crc kubenswrapper[4766]: I1213 03:48:50.954631 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" event={"ID":"3f461109-9a3d-4f7a-9265-1db9a81122e5","Type":"ContainerStarted","Data":"6538a26093144940a504017b776f19137fd882fe7e82e4ab40b6a106c45167e1"}
Dec 13 03:48:51 crc kubenswrapper[4766]: I1213 03:48:51.960262 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm"
Dec 13 03:48:51 crc kubenswrapper[4766]: I1213 03:48:51.966787 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm"
Dec 13 03:48:51 crc kubenswrapper[4766]: I1213 03:48:51.983510 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" podStartSLOduration=5.983489645 podStartE2EDuration="5.983489645s" podCreationTimestamp="2025-12-13 03:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:48:51.979299052 +0000 UTC m=+263.489232036" watchObservedRunningTime="2025-12-13 03:48:51.983489645 +0000 UTC m=+263.493422609"
Dec 13 03:48:52 crc kubenswrapper[4766]: I1213 03:48:52.023415 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" podStartSLOduration=6.023392873 podStartE2EDuration="6.023392873s" podCreationTimestamp="2025-12-13 03:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:48:52.021317162 +0000 UTC m=+263.531250136" watchObservedRunningTime="2025-12-13 03:48:52.023392873 +0000 UTC m=+263.533325847"
Dec 13 03:48:54 crc kubenswrapper[4766]: I1213 03:48:54.307632 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:48:54 crc kubenswrapper[4766]: I1213 03:48:54.309170 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:48:54 crc kubenswrapper[4766]: I1213 03:48:54.540725 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:48:55 crc kubenswrapper[4766]: I1213 03:48:55.013189 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:48:58 crc kubenswrapper[4766]: I1213 03:48:58.672545 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2"
Dec 13 03:48:58 crc kubenswrapper[4766]: I1213 03:48:58.684733 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2"
Dec 13 03:49:00 crc kubenswrapper[4766]: I1213 03:49:00.131497 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" containerName="oauth-openshift" containerID="cri-o://f908d52b65e3fdb94e922b78d11b009b195f7302142cf7a20c6d8facb7e60100" gracePeriod=15
Dec 13 03:49:04 crc kubenswrapper[4766]: I1213 03:49:04.037926 4766 generic.go:334] "Generic (PLEG): container finished" podID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" containerID="f908d52b65e3fdb94e922b78d11b009b195f7302142cf7a20c6d8facb7e60100" exitCode=0
Dec 13 03:49:04 crc kubenswrapper[4766]: I1213 03:49:04.038021 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" event={"ID":"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a","Type":"ContainerDied","Data":"f908d52b65e3fdb94e922b78d11b009b195f7302142cf7a20c6d8facb7e60100"}
Dec 13 03:49:06 crc kubenswrapper[4766]: I1213 03:49:06.336132 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c4b47bc-vshkm"]
Dec 13 03:49:06 crc kubenswrapper[4766]: I1213 03:49:06.337657 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" containerName="controller-manager" containerID="cri-o://6538a26093144940a504017b776f19137fd882fe7e82e4ab40b6a106c45167e1" gracePeriod=30
Dec 13 03:49:06 crc kubenswrapper[4766]: I1213 03:49:06.352467 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2"]
Dec 13 03:49:06 crc kubenswrapper[4766]: I1213 03:49:06.352734 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" containerName="route-controller-manager" containerID="cri-o://d4e7ed6e0f7fa0185b3d7cdde59f6541e344873cb13cec7d5162a61c808874d8" gracePeriod=30
Dec 13 03:49:08 crc kubenswrapper[4766]: I1213 03:49:08.066119 4766 generic.go:334] "Generic (PLEG): container finished" podID="3f461109-9a3d-4f7a-9265-1db9a81122e5" containerID="6538a26093144940a504017b776f19137fd882fe7e82e4ab40b6a106c45167e1" exitCode=0
Dec 13 03:49:08 crc kubenswrapper[4766]: I1213 03:49:08.066235 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" event={"ID":"3f461109-9a3d-4f7a-9265-1db9a81122e5","Type":"ContainerDied","Data":"6538a26093144940a504017b776f19137fd882fe7e82e4ab40b6a106c45167e1"}
Dec 13 03:49:08 crc kubenswrapper[4766]: I1213 03:49:08.068989 4766 generic.go:334] "Generic (PLEG): container finished" podID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" containerID="d4e7ed6e0f7fa0185b3d7cdde59f6541e344873cb13cec7d5162a61c808874d8" exitCode=0
Dec 13 03:49:08 crc kubenswrapper[4766]: I1213 03:49:08.069041 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" event={"ID":"c4724b2a-c46a-4d7b-8227-6a15e978dd16","Type":"ContainerDied","Data":"d4e7ed6e0f7fa0185b3d7cdde59f6541e344873cb13cec7d5162a61c808874d8"}
Dec 13 03:49:08 crc kubenswrapper[4766]: I1213 03:49:08.080378 4766 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-l2gzj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body=
Dec 13 03:49:08 crc kubenswrapper[4766]: I1213 03:49:08.080494 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused"
Dec 13 03:49:08 crc kubenswrapper[4766]: I1213 03:49:08.672405 4766 patch_prober.go:28] interesting pod/route-controller-manager-5c884cd845-kkzq2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Dec 13 03:49:08 crc kubenswrapper[4766]: I1213 03:49:08.672740 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Dec 13 03:49:08 crc kubenswrapper[4766]: I1213 03:49:08.739858 4766 patch_prober.go:28] interesting pod/controller-manager-6c4b47bc-vshkm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body=
Dec 13 03:49:08 crc kubenswrapper[4766]: I1213 03:49:08.739926 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.882122 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.944612 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"]
Dec 13 03:49:12 crc kubenswrapper[4766]: E1213 03:49:12.945582 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" containerName="oauth-openshift"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.945683 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" containerName="oauth-openshift"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.946302 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" containerName="oauth-openshift"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.947473 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.992831 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.994124 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"]
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.996141 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 13 03:49:12 crc kubenswrapper[4766]: E1213 03:49:12.996528 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" containerName="route-controller-manager"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.996547 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" containerName="route-controller-manager"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.996709 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" containerName="route-controller-manager"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.997210 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.997842 4766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.998162 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c" gracePeriod=15
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.998218 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d" gracePeriod=15
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.998232 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8" gracePeriod=15
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.998248 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339" gracePeriod=15
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.998176 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0" gracePeriod=15
Dec 13 03:49:12 crc kubenswrapper[4766]: I1213 03:49:12.999757 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.000050 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000067 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.000083 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000091 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.000100 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000106 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.000115 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000121 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.000133 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000139 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.000150 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000156 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.000169 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000175 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000322 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000332 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000341 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000347 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000358 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000369 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.000488 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000496 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.000601 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001003 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-error\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001109 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-trusted-ca-bundle\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001133 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-serving-cert\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001157 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js9d4\" (UniqueName: \"kubernetes.io/projected/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-kube-api-access-js9d4\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001185 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-service-ca\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001204 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-session\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001244 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-dir\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001276 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-provider-selection\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001292 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-ocp-branding-template\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001331 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-idp-0-file-data\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001366 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-cliconfig\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001389 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-login\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001465 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-router-certs\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.001501 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-policies\") pod \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\" (UID: \"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.002787 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.002884 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.003547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.004345 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.013482 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.014583 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.016074 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.016365 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.019159 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-kube-api-access-js9d4" (OuterVolumeSpecName: "kube-api-access-js9d4") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "kube-api-access-js9d4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.019380 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.025512 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.039631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.041562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.043794 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.055968 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" (UID: "ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.072889 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.212:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.100888 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" event={"ID":"ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a","Type":"ContainerDied","Data":"233cae5fdbbef271099572cc9f46f4e02df9787b26f61f577adb6203dc172224"}
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.100952 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.100965 4766 scope.go:117] "RemoveContainer" containerID="f908d52b65e3fdb94e922b78d11b009b195f7302142cf7a20c6d8facb7e60100"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.103568 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4724b2a-c46a-4d7b-8227-6a15e978dd16-serving-cert\") pod \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.103653 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-config\") pod \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.103717 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k6t5\" (UniqueName: \"kubernetes.io/projected/c4724b2a-c46a-4d7b-8227-6a15e978dd16-kube-api-access-9k6t5\") pod \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.103864 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-client-ca\") pod \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\" (UID: \"c4724b2a-c46a-4d7b-8227-6a15e978dd16\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104085 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104134 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104164 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104186 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104221 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104241 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104269 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-session\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-template-login\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104376 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104398 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104419 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19086cc5-bade-4f53-90cc-0370b7d2f6a9-audit-dir\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104460 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-audit-policies\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104489 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104531 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104589 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104609 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104631 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz6bv\" (UniqueName: \"kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104654 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-template-error\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104681 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104696 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104714 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104735 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104795 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104806 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104818 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js9d4\" (UniqueName: \"kubernetes.io/projected/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-kube-api-access-js9d4\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104830 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104840 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104853 4766 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104862 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104874 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104888 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104904 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104918 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104928 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104937 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.104949 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.105826 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.106743 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-client-ca" (OuterVolumeSpecName: "client-ca") pod "c4724b2a-c46a-4d7b-8227-6a15e978dd16" (UID: "c4724b2a-c46a-4d7b-8227-6a15e978dd16"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.106890 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-config" (OuterVolumeSpecName: "config") pod "c4724b2a-c46a-4d7b-8227-6a15e978dd16" (UID: "c4724b2a-c46a-4d7b-8227-6a15e978dd16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.112383 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4724b2a-c46a-4d7b-8227-6a15e978dd16-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c4724b2a-c46a-4d7b-8227-6a15e978dd16" (UID: "c4724b2a-c46a-4d7b-8227-6a15e978dd16"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.114533 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4724b2a-c46a-4d7b-8227-6a15e978dd16-kube-api-access-9k6t5" (OuterVolumeSpecName: "kube-api-access-9k6t5") pod "c4724b2a-c46a-4d7b-8227-6a15e978dd16" (UID: "c4724b2a-c46a-4d7b-8227-6a15e978dd16"). InnerVolumeSpecName "kube-api-access-9k6t5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.115367 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d95nn" event={"ID":"9c50096c-0297-4e96-868c-d34cfc326d46","Type":"ContainerStarted","Data":"039ffeb25ca63f1d93918d2e40332955ab662c49c12cc3d99736d6f4f95a0166"}
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.116414 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.129098 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.132593 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.132624 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-wb4lm.1880a9cd4b79932a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-wb4lm,UID:16b3386a-f52e-47b8-a0cb-172dd34f4761,APIVersion:v1,ResourceVersion:28017,FieldPath:spec.initContainers{extract-content},},Reason:Started,Message:Started container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 03:49:13.128882986 +0000 UTC m=+284.638815950,LastTimestamp:2025-12-13 03:49:13.128882986 +0000 UTC m=+284.638815950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.133992 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.134739 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.134998 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7x6l" event={"ID":"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd","Type":"ContainerStarted","Data":"07d72fd5d681cfbc9f206f30a484559cf20fbe192fcc3f268f3a47c355e28590"}
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.140342 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq9t5" event={"ID":"116c9e00-53c9-4d49-813e-2f0dd2b24411","Type":"ContainerStarted","Data":"54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c"}
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.151761 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dqhc" event={"ID":"f162311d-72df-42b4-b586-7bc1d4945c99","Type":"ContainerStarted","Data":"fcee0a230f0613ad67d95c1d7ca81d2c2c5e09b5a9bc42c7619e84afd08a5295"}
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.155034 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzmqb" event={"ID":"35afc352-db00-48b9-b888-6b8b9bc36403","Type":"ContainerStarted","Data":"70c403e7019271062fd03e40a6c7abf6a1a8ad4d852977e528162bf3334f2c15"}
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.161736 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.162602 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" event={"ID":"c4724b2a-c46a-4d7b-8227-6a15e978dd16","Type":"ContainerDied","Data":"35d7947069747b67558b41cee9306b0b684e6dbc1f3f65188cec7f1934c34bb3"}
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.162925 4766 scope.go:117] "RemoveContainer" containerID="d4e7ed6e0f7fa0185b3d7cdde59f6541e344873cb13cec7d5162a61c808874d8"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.163955 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.164891 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.165184 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.172803 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb4lm" event={"ID":"16b3386a-f52e-47b8-a0cb-172dd34f4761","Type":"ContainerStarted","Data":"649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6"}
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.177660 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.178172 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.178419 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.179308 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.180364 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsl6d" event={"ID":"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34","Type":"ContainerStarted","Data":"c7f194d8329170c8ebc3429e8fa51099ae65e2ac87c52a3a3a8fbdda69b5ed8a"}
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.181392 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.181772 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.182053 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.182385 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.182778 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.183223 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.183391 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.183661 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.183824 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.183991 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.206306 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h44rs\" (UniqueName: \"kubernetes.io/projected/3f461109-9a3d-4f7a-9265-1db9a81122e5-kube-api-access-h44rs\") pod \"3f461109-9a3d-4f7a-9265-1db9a81122e5\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207306 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-client-ca\") pod \"3f461109-9a3d-4f7a-9265-1db9a81122e5\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207356 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f461109-9a3d-4f7a-9265-1db9a81122e5-serving-cert\") pod \"3f461109-9a3d-4f7a-9265-1db9a81122e5\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207533 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-proxy-ca-bundles\") pod \"3f461109-9a3d-4f7a-9265-1db9a81122e5\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207571 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-config\") pod \"3f461109-9a3d-4f7a-9265-1db9a81122e5\" (UID: \"3f461109-9a3d-4f7a-9265-1db9a81122e5\") "
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207773 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-template-login\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207821 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207855 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207881 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19086cc5-bade-4f53-90cc-0370b7d2f6a9-audit-dir\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207920 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-audit-policies\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207937 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207954 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207976 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.207994 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6bv\" (UniqueName: \"kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208051 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-template-error\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208113 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208133 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208155 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208179 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208198 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208220 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208239 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID:
\"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208261 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208282 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208311 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208337 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-session\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208378 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208389 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4724b2a-c46a-4d7b-8227-6a15e978dd16-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208400 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4724b2a-c46a-4d7b-8227-6a15e978dd16-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208410 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k6t5\" (UniqueName: \"kubernetes.io/projected/c4724b2a-c46a-4d7b-8227-6a15e978dd16-kube-api-access-9k6t5\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208561 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208924 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.208993 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.209126 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19086cc5-bade-4f53-90cc-0370b7d2f6a9-audit-dir\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.209183 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.209190 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "3f461109-9a3d-4f7a-9265-1db9a81122e5" (UID: "3f461109-9a3d-4f7a-9265-1db9a81122e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.209293 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.210502 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.210825 4766 projected.go:194] Error preparing data for projected volume kube-api-access-sz6bv for pod openshift-authentication/oauth-openshift-f6658f7c8-7lmh4: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.210959 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv podName:19086cc5-bade-4f53-90cc-0370b7d2f6a9 nodeName:}" failed. No retries permitted until 2025-12-13 03:49:13.710894466 +0000 UTC m=+285.220827520 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sz6bv" (UniqueName: "kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv") pod "oauth-openshift-f6658f7c8-7lmh4" (UID: "19086cc5-bade-4f53-90cc-0370b7d2f6a9") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.211220 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.213074 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-template-error\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.213162 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.213635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.216526 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-template-login\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.216554 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.217125 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.218046 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.218805 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.219897 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f461109-9a3d-4f7a-9265-1db9a81122e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3f461109-9a3d-4f7a-9265-1db9a81122e5" (UID: "3f461109-9a3d-4f7a-9265-1db9a81122e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.221072 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19086cc5-bade-4f53-90cc-0370b7d2f6a9-audit-policies\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.221591 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f461109-9a3d-4f7a-9265-1db9a81122e5-kube-api-access-h44rs" (OuterVolumeSpecName: "kube-api-access-h44rs") pod "3f461109-9a3d-4f7a-9265-1db9a81122e5" (UID: "3f461109-9a3d-4f7a-9265-1db9a81122e5"). InnerVolumeSpecName "kube-api-access-h44rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.221982 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.223483 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.223701 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3f461109-9a3d-4f7a-9265-1db9a81122e5" (UID: "3f461109-9a3d-4f7a-9265-1db9a81122e5"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.229996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.230411 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-config" (OuterVolumeSpecName: "config") pod "3f461109-9a3d-4f7a-9265-1db9a81122e5" (UID: "3f461109-9a3d-4f7a-9265-1db9a81122e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.230841 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19086cc5-bade-4f53-90cc-0370b7d2f6a9-v4-0-config-system-session\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.310183 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h44rs\" (UniqueName: \"kubernetes.io/projected/3f461109-9a3d-4f7a-9265-1db9a81122e5-kube-api-access-h44rs\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.310403 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.310413 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f461109-9a3d-4f7a-9265-1db9a81122e5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.310439 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.310449 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f461109-9a3d-4f7a-9265-1db9a81122e5-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.366360 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c50096c_0297_4e96_868c_d34cfc326d46.slice/crio-039ffeb25ca63f1d93918d2e40332955ab662c49c12cc3d99736d6f4f95a0166.scope\": RecentStats: unable to find data in memory cache]" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.373957 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 03:49:13 crc kubenswrapper[4766]: I1213 03:49:13.717158 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6bv\" (UniqueName: \"kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.718493 4766 projected.go:194] Error preparing data for projected volume kube-api-access-sz6bv for pod openshift-authentication/oauth-openshift-f6658f7c8-7lmh4: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:49:13 crc kubenswrapper[4766]: E1213 03:49:13.718583 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv podName:19086cc5-bade-4f53-90cc-0370b7d2f6a9 nodeName:}" failed. No retries permitted until 2025-12-13 03:49:14.718559906 +0000 UTC m=+286.228492870 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sz6bv" (UniqueName: "kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv") pod "oauth-openshift-f6658f7c8-7lmh4" (UID: "19086cc5-bade-4f53-90cc-0370b7d2f6a9") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.188744 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" event={"ID":"3f461109-9a3d-4f7a-9265-1db9a81122e5","Type":"ContainerDied","Data":"93a46cf412c7066851c18ae87582816734ef215bfffd1b3aa511740ddc126fc2"} Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.189080 4766 scope.go:117] "RemoveContainer" containerID="6538a26093144940a504017b776f19137fd882fe7e82e4ab40b6a106c45167e1" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.188775 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.190229 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.190938 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.191258 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.191634 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.191958 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.192336 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.194003 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.194688 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.195042 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.195264 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.195687 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.199611 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.200586 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8" exitCode=0 Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.200636 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0" exitCode=0 Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.200651 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339" exitCode=2 Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.203750 4766 scope.go:117] "RemoveContainer" containerID="1b82372ac1c99295e1f0f6bb604681e01eaab314d8ea2c89e12a510b3801a4f3" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.207567 4766 generic.go:334] "Generic (PLEG): container finished" podID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" containerID="9c09fafd2652185b304016008d94651e7b185bc694ee212b6429a3a7b31a72da" exitCode=0 Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.207679 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"151a6cc7-14cd-4ac6-a65e-d165c2e8520f","Type":"ContainerDied","Data":"9c09fafd2652185b304016008d94651e7b185bc694ee212b6429a3a7b31a72da"} Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.208652 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.209805 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.210372 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.210696 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.210933 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.211161 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.212142 4766 generic.go:334] "Generic (PLEG): container finished" podID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerID="649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6" exitCode=0 Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.212224 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb4lm" event={"ID":"16b3386a-f52e-47b8-a0cb-172dd34f4761","Type":"ContainerDied","Data":"649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6"} Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.213297 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.213588 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.213842 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 
03:49:14.214043 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.214253 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.214463 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.214982 4766 generic.go:334] "Generic (PLEG): container finished" podID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerID="c7f194d8329170c8ebc3429e8fa51099ae65e2ac87c52a3a3a8fbdda69b5ed8a" exitCode=0 Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.215030 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsl6d" event={"ID":"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34","Type":"ContainerDied","Data":"c7f194d8329170c8ebc3429e8fa51099ae65e2ac87c52a3a3a8fbdda69b5ed8a"} Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.215852 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.216754 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.216968 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.217303 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.217731 4766 status_manager.go:851] 
"Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.218044 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111"} Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.218157 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0e70e7d502c3a9db69518f5b33910b3c5d7ed7948d84327399dd82d0675f1fee"} Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.218187 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.218970 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: E1213 03:49:14.218998 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.212:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.219242 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.219475 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.219766 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.220000 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" 
pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.220304 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.221027 4766 generic.go:334] "Generic (PLEG): container finished" podID="f162311d-72df-42b4-b586-7bc1d4945c99" containerID="fcee0a230f0613ad67d95c1d7ca81d2c2c5e09b5a9bc42c7619e84afd08a5295" exitCode=0 Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.221079 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dqhc" event={"ID":"f162311d-72df-42b4-b586-7bc1d4945c99","Type":"ContainerDied","Data":"fcee0a230f0613ad67d95c1d7ca81d2c2c5e09b5a9bc42c7619e84afd08a5295"} Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.222132 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.222294 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.222507 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.222721 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.222951 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.223226 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" 
pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.225044 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.227725 4766 generic.go:334] "Generic (PLEG): container finished" podID="9c50096c-0297-4e96-868c-d34cfc326d46" containerID="039ffeb25ca63f1d93918d2e40332955ab662c49c12cc3d99736d6f4f95a0166" exitCode=0 Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.227833 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d95nn" event={"ID":"9c50096c-0297-4e96-868c-d34cfc326d46","Type":"ContainerDied","Data":"039ffeb25ca63f1d93918d2e40332955ab662c49c12cc3d99736d6f4f95a0166"} Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.229002 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.230011 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.230391 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.230621 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.230829 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.231108 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.231289 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.231475 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.231698 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.231868 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.232161 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.232321 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.232491 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.232661 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.232910 4766 status_manager.go:851] "Failed to get status for pod" 
podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.233065 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.233227 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.233384 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.233555 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.233708 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.234021 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.529501 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kq9t5" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.529589 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kq9t5" Dec 13 03:49:14 crc kubenswrapper[4766]: I1213 03:49:14.736542 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6bv\" (UniqueName: \"kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:14 crc 
kubenswrapper[4766]: E1213 03:49:14.737383 4766 projected.go:194] Error preparing data for projected volume kube-api-access-sz6bv for pod openshift-authentication/oauth-openshift-f6658f7c8-7lmh4: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:49:14 crc kubenswrapper[4766]: E1213 03:49:14.737892 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv podName:19086cc5-bade-4f53-90cc-0370b7d2f6a9 nodeName:}" failed. No retries permitted until 2025-12-13 03:49:16.737854863 +0000 UTC m=+288.247787837 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-sz6bv" (UniqueName: "kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv") pod "oauth-openshift-f6658f7c8-7lmh4" (UID: "19086cc5-bade-4f53-90cc-0370b7d2f6a9") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.239194 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.243692 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d95nn" event={"ID":"9c50096c-0297-4e96-868c-d34cfc326d46","Type":"ContainerStarted","Data":"c67142ece289f91db175f19e66a1fad7c98085cb680093abbf792615d72a9259"} Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.244579 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.245016 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.245413 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.245763 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.245966 4766 status_manager.go:851] "Failed to get status for pod" 
podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.246182 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.246476 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.246885 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.247366 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.247997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb4lm" event={"ID":"16b3386a-f52e-47b8-a0cb-172dd34f4761","Type":"ContainerStarted","Data":"22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176"} Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.248781 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.249050 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.249484 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.249718 4766 
status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.250002 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.250307 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.250554 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.250727 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.251267 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.251807 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.252135 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.252398 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: 
connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.252639 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.252663 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsl6d" event={"ID":"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34","Type":"ContainerStarted","Data":"d1d0f79669b994734c82a0410f431e7a31dd43f0692e633ff3283ff6fe979d2d"} Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.253767 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.254252 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.254815 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.255572 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.255873 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.256146 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.256923 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: 
connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.257127 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dqhc" event={"ID":"f162311d-72df-42b4-b586-7bc1d4945c99","Type":"ContainerStarted","Data":"332d60252bbe8ac4131f2c43d4dc0e44c5f341ac1dfd02ec4e9de9da98877e1d"} Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.260879 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.262420 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.262832 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.264933 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.265628 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.266132 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.266409 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.266659 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 
38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.266869 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.267092 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.267313 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.267602 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.267975 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.268211 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.268462 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.525059 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.526262 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.526645 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.526977 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.527281 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.527653 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.527905 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.528931 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.529307 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.529555 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.529854 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.530089 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.574321 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kq9t5" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerName="registry-server" probeResult="failure" output=< Dec 13 03:49:15 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Dec 13 03:49:15 crc kubenswrapper[4766]: > Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.651947 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kube-api-access\") pod \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.652002 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-var-lock\") pod \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.652055 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kubelet-dir\") pod \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\" (UID: \"151a6cc7-14cd-4ac6-a65e-d165c2e8520f\") " Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.652128 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-var-lock" (OuterVolumeSpecName: "var-lock") pod "151a6cc7-14cd-4ac6-a65e-d165c2e8520f" (UID: "151a6cc7-14cd-4ac6-a65e-d165c2e8520f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.652287 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "151a6cc7-14cd-4ac6-a65e-d165c2e8520f" (UID: "151a6cc7-14cd-4ac6-a65e-d165c2e8520f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.652382 4766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-var-lock\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.657634 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "151a6cc7-14cd-4ac6-a65e-d165c2e8520f" (UID: "151a6cc7-14cd-4ac6-a65e-d165c2e8520f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.753238 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:15 crc kubenswrapper[4766]: I1213 03:49:15.753284 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/151a6cc7-14cd-4ac6-a65e-d165c2e8520f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.363604 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"151a6cc7-14cd-4ac6-a65e-d165c2e8520f","Type":"ContainerDied","Data":"4744b134aa296efc3655e6173ed1a92a19e21a686c4f2cae46d1d1ca1858dddf"} Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.363682 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4744b134aa296efc3655e6173ed1a92a19e21a686c4f2cae46d1d1ca1858dddf" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.363759 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.377421 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.378774 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c" exitCode=0 Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.387362 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.387646 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.387878 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.388264 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.388626 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.389061 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.389396 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.389727 4766 status_manager.go:851] "Failed to get status for pod" 
podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.389976 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.390257 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.395772 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:16 crc kubenswrapper[4766]: I1213 03:49:16.743908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6bv\" (UniqueName: \"kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:16 crc kubenswrapper[4766]: E1213 03:49:16.744927 4766 projected.go:194] Error preparing data for projected volume kube-api-access-sz6bv for pod openshift-authentication/oauth-openshift-f6658f7c8-7lmh4: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:49:16 crc kubenswrapper[4766]: E1213 03:49:16.744998 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv podName:19086cc5-bade-4f53-90cc-0370b7d2f6a9 nodeName:}" failed. No retries permitted until 2025-12-13 03:49:20.744979716 +0000 UTC m=+292.254912680 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sz6bv" (UniqueName: "kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv") pod "oauth-openshift-f6658f7c8-7lmh4" (UID: "19086cc5-bade-4f53-90cc-0370b7d2f6a9") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:49:18 crc kubenswrapper[4766]: E1213 03:49:18.034528 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:18 crc kubenswrapper[4766]: E1213 03:49:18.035193 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:18 crc kubenswrapper[4766]: E1213 03:49:18.035618 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:18 crc kubenswrapper[4766]: E1213 03:49:18.035895 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:18 crc kubenswrapper[4766]: E1213 03:49:18.036354 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:18 crc kubenswrapper[4766]: I1213 03:49:18.036449 4766 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 13 03:49:18 crc kubenswrapper[4766]: E1213 03:49:18.037003 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="200ms" Dec 13 03:49:18 crc kubenswrapper[4766]: E1213 03:49:18.238167 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="400ms" Dec 13 03:49:18 crc kubenswrapper[4766]: I1213 03:49:18.400383 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 03:49:18 crc kubenswrapper[4766]: I1213 03:49:18.402850 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d" exitCode=0 Dec 13 03:49:18 crc kubenswrapper[4766]: E1213 03:49:18.639440 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.212:6443: connect: connection refused" interval="800ms" Dec 13 03:49:19 crc kubenswrapper[4766]: E1213 03:49:19.440941 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="1.6s" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.618865 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.619350 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.620063 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.620352 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.620617 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.620911 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.621284 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.621554 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.621852 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.622203 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.622408 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.876903 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6dqhc" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.877164 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6dqhc" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.919630 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6dqhc" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.920350 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.921052 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.921836 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.922156 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 
38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.922454 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.922751 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.923008 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.923289 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.923555 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.923809 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.924073 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.991282 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p7x6l" Dec 13 03:49:19 crc kubenswrapper[4766]: I1213 03:49:19.991359 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p7x6l" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.034086 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p7x6l" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.034899 4766 status_manager.go:851] "Failed to get status for 
pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.035783 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.036417 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.036728 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.037040 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.037361 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.037666 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.038146 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.038463 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 
03:49:20.038704 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.038953 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: E1213 03:49:20.126848 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:49:20Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:49:20Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:49:20Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T03:49:20Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:b295ecbeb669e7fe4668a62e3b5a215e25e76f8847d56b8dded02988a94e4aba\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:f1ca2393659be90fe7121a54bcc3015ffc91c7c5830c6f71f698446b715a6ab3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1634292050},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2f1b1352e6ba3c65c9aa543bf60c5a6090358765bd23a18ae7f7e8b4b5111ad3\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d3ca2283ab12f3e0c5f4c4eb8c80c6d41f811c351343e87453cbe0bc4eaac5ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1236534630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1152844048},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b3
7\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-c
luster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: E1213 03:49:20.127645 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: E1213 03:49:20.128129 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: E1213 03:49:20.128424 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: E1213 03:49:20.128746 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: E1213 03:49:20.128777 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.130998 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hzmqb" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.131038 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hzmqb" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.174501 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hzmqb" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.175462 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.176264 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" 
pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.176924 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.177269 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.177742 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.178066 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.178387 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.178824 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.179163 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.179526 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.179850 4766 
status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.416950 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.417016 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.454846 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p7x6l" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.457944 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.458265 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.458488 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.459014 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.459563 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.460027 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.460587 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.460958 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.461252 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.461500 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.461519 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.462189 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.462712 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.463059 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.463846 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.464178 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.464562 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.464864 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.465144 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.465506 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.465863 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.466188 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.466567 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.473512 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hzmqb" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.474160 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.474535 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.474911 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.475136 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.475340 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.475845 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.476517 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.476866 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.477291 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.477661 4766 status_manager.go:851] "Failed to get status for pod" 
podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.478180 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.509246 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6dqhc" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.511478 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.511974 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.512384 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.512770 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.513109 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.513571 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.514002 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" 
pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.514367 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.514750 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.515084 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.515471 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.748022 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6bv\" (UniqueName: \"kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:20 crc kubenswrapper[4766]: E1213 03:49:20.749076 4766 projected.go:194] Error preparing data for projected volume kube-api-access-sz6bv for pod openshift-authentication/oauth-openshift-f6658f7c8-7lmh4: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:49:20 crc kubenswrapper[4766]: E1213 03:49:20.749177 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv podName:19086cc5-bade-4f53-90cc-0370b7d2f6a9 nodeName:}" failed. No retries permitted until 2025-12-13 03:49:28.749147776 +0000 UTC m=+300.259080750 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sz6bv" (UniqueName: "kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv") pod "oauth-openshift-f6658f7c8-7lmh4" (UID: "19086cc5-bade-4f53-90cc-0370b7d2f6a9") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token": dial tcp 38.102.83.212:6443: connect: connection refused Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.817979 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.819016 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.819786 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.820140 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.820651 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.820980 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.821496 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.821782 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.822049 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.822357 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.822615 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.822885 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.823204 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.823512 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.849510 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.849595 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.849669 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.849663 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: 
"f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.849691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.849822 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.849998 4766 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.850017 4766 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:20 crc kubenswrapper[4766]: I1213 03:49:20.850031 4766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 03:49:21 crc kubenswrapper[4766]: E1213 03:49:21.042605 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="3.2s" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.422161 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.423017 4766 scope.go:117] "RemoveContainer" containerID="3c82736d8d5e400aa9f77b6e3481b75b3127380fbfe2ab6b998360b771a41dc8" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.423341 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.437781 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.438687 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.439250 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.439754 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.440233 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.440575 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.441072 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.441547 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.441836 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.442071 4766 scope.go:117] "RemoveContainer" containerID="80bfa61fe2ffec880e0bf91a6fd8ec87e214e349e6dd9a18891ad83a6dbd11a0" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.442116 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.442376 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.442670 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.468299 4766 scope.go:117] "RemoveContainer" containerID="6dfc27f3904d38a049060b79b73a6188621efba1747a08f49b10ecdbe96d748d" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.473512 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.474295 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.474827 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.475142 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.475411 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: 
connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.475704 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.475961 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.476231 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.476732 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.477055 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.477653 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.477913 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.478139 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.484918 4766 scope.go:117] "RemoveContainer" containerID="60e31f322c3b371b85cc7cd5aa9cd58971843ad03d616b7dbdfb6f6ddf270339" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.498078 4766 scope.go:117] "RemoveContainer" 
containerID="45df467265a3831269f13b029fa57c2c39ec547e329018e4c45a962c792ad04c" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.513064 4766 scope.go:117] "RemoveContainer" containerID="6a92972d5284d10427935c1c77884dd721507f4d7a8402f535f55de595016cf1" Dec 13 03:49:21 crc kubenswrapper[4766]: I1213 03:49:21.625303 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 13 03:49:21 crc kubenswrapper[4766]: E1213 03:49:21.627367 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.212:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-wb4lm.1880a9cd4b79932a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-wb4lm,UID:16b3386a-f52e-47b8-a0cb-172dd34f4761,APIVersion:v1,ResourceVersion:28017,FieldPath:spec.initContainers{extract-content},},Reason:Started,Message:Started container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 03:49:13.128882986 +0000 UTC m=+284.638815950,LastTimestamp:2025-12-13 03:49:13.128882986 +0000 UTC m=+284.638815950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.747353 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fsl6d" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.747739 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fsl6d" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.760882 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d95nn" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.760957 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d95nn" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.793294 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fsl6d" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.793385 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.793819 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.794788 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.795110 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.795359 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.795600 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.795815 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.796022 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.796220 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.796561 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.796903 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.804285 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-d95nn" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.804917 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.805310 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.805721 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.805999 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.806331 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.806632 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.806859 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.807048 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.807345 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.807672 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:23 crc kubenswrapper[4766]: I1213 03:49:23.807972 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: E1213 03:49:24.243626 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.212:6443: connect: connection refused" interval="6.4s" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.477834 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fsl6d" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.478575 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.478818 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.479069 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.479591 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.479841 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc 
kubenswrapper[4766]: I1213 03:49:24.480060 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.480482 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.480821 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.481073 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.481168 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d95nn" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.481294 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.481625 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.481996 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.482192 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.482359 4766 status_manager.go:851] "Failed to 
get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.482627 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.482823 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.482981 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.483140 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.483295 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.483470 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.483715 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.484015 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 
03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.566652 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kq9t5" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.567301 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.567675 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.568198 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.568490 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.568703 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.568981 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.569196 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.569407 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.569716 4766 status_manager.go:851] "Failed to get status for pod" 
podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.570042 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.570352 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.602062 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kq9t5" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.602833 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.603216 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.603875 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.604236 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.604691 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.605038 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" 
pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.605412 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.605743 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.605951 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.606186 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.606489 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.616107 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.616866 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.617290 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.617875 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.618351 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.618894 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.619252 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.619718 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.620060 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.620345 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.620724 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.621006 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.633081 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.633145 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:24 crc kubenswrapper[4766]: E1213 03:49:24.634124 4766 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:24 crc kubenswrapper[4766]: I1213 03:49:24.634821 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:24 crc kubenswrapper[4766]: W1213 03:49:24.662292 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-af3e371ae7f26ecc236fb92950b44925fe1b9de85677972ae2a7fcb4810062bb WatchSource:0}: Error finding container af3e371ae7f26ecc236fb92950b44925fe1b9de85677972ae2a7fcb4810062bb: Status 404 returned error can't find the container with id af3e371ae7f26ecc236fb92950b44925fe1b9de85677972ae2a7fcb4810062bb Dec 13 03:49:25 crc kubenswrapper[4766]: I1213 03:49:25.451022 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"af3e371ae7f26ecc236fb92950b44925fe1b9de85677972ae2a7fcb4810062bb"} Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.466541 4766 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="9acce836a761ed8d0388ba6581b978d088261b2857d3d910534ddca9615db892" exitCode=0 Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.466934 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.466629 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9acce836a761ed8d0388ba6581b978d088261b2857d3d910534ddca9615db892"} Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.466962 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.467792 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.468188 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.468784 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.469227 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc 
kubenswrapper[4766]: I1213 03:49:27.469863 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.470137 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.470326 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.470372 4766 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f" exitCode=1 Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.470394 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f"} Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.470613 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.470979 4766 scope.go:117] "RemoveContainer" containerID="71f756a111757421de2eb6130ed7ae8cd10a2fbdb4c7943534f3b31dead0186f" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.471014 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.471476 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: E1213 03:49:27.471844 4766 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.472031 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" 
pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.475608 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.476344 4766 status_manager.go:851] "Failed to get status for pod" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" pod="openshift-authentication/oauth-openshift-558db77b4-l2gzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-l2gzj\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.481348 4766 status_manager.go:851] "Failed to get status for pod" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" pod="openshift-marketplace/certified-operators-p7x6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p7x6l\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.482173 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.482894 4766 status_manager.go:851] "Failed to get status for pod" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.483764 4766 status_manager.go:851] "Failed to get status for pod" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" pod="openshift-marketplace/community-operators-wb4lm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wb4lm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.484393 4766 status_manager.go:851] "Failed to get status for pod" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" pod="openshift-marketplace/redhat-marketplace-fsl6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fsl6d\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.484834 4766 status_manager.go:851] "Failed to get status for pod" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" pod="openshift-controller-manager/controller-manager-6c4b47bc-vshkm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6c4b47bc-vshkm\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc 
kubenswrapper[4766]: I1213 03:49:27.485624 4766 status_manager.go:851] "Failed to get status for pod" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" pod="openshift-marketplace/redhat-marketplace-d95nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d95nn\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.486405 4766 status_manager.go:851] "Failed to get status for pod" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" pod="openshift-marketplace/redhat-operators-kq9t5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kq9t5\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.486824 4766 status_manager.go:851] "Failed to get status for pod" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" pod="openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5c884cd845-kkzq2\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.487716 4766 status_manager.go:851] "Failed to get status for pod" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" pod="openshift-marketplace/community-operators-6dqhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dqhc\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.488328 4766 status_manager.go:851] "Failed to get status for pod" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" pod="openshift-marketplace/certified-operators-hzmqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzmqb\": dial tcp 38.102.83.212:6443: connect: connection refused" Dec 13 03:49:27 crc kubenswrapper[4766]: I1213 03:49:27.751286 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:49:28 crc kubenswrapper[4766]: I1213 03:49:28.482555 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 13 03:49:28 crc kubenswrapper[4766]: I1213 03:49:28.482765 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"da37869dcbc46ce4000d2f7726293f3811614609b9eb09b184c109cb1e04ec5a"} Dec 13 03:49:28 crc kubenswrapper[4766]: I1213 03:49:28.485782 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e6e1eb1b7a8d7ab11552ad4d60658f87abf3425ef6ee8a744cb539b17c8404c5"} Dec 13 03:49:28 crc kubenswrapper[4766]: I1213 03:49:28.485828 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dc83fd5a614285df60c34c61b69940f81d431fa819a1bbf42f645d3b6edad788"} Dec 13 03:49:28 crc kubenswrapper[4766]: I1213 03:49:28.485840 4766 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a151da8a809c0098d00466967f8f01ad3d457e4c4437c126795785e4e512dc1d"} Dec 13 03:49:28 crc kubenswrapper[4766]: I1213 03:49:28.782909 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6bv\" (UniqueName: \"kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:29 crc kubenswrapper[4766]: I1213 03:49:29.495480 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8681ec0548e70fd4290b24367203f8ec0836adabb5aa192bada0abd3ef03cb5f"} Dec 13 03:49:29 crc kubenswrapper[4766]: I1213 03:49:29.495872 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:29 crc kubenswrapper[4766]: I1213 03:49:29.495907 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:29 crc kubenswrapper[4766]: I1213 03:49:29.495905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eaaf8b53dcee20aa21fd8a1fd8fe7769b65c73dddf809dd9bc4be0ef2fa35165"} Dec 13 03:49:29 crc kubenswrapper[4766]: I1213 03:49:29.635566 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:29 crc kubenswrapper[4766]: I1213 03:49:29.635606 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:29 crc kubenswrapper[4766]: I1213 03:49:29.641816 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]log ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]etcd ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/generic-apiserver-start-informers ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/priority-and-fairness-filter ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/start-apiextensions-informers ok Dec 13 03:49:29 crc kubenswrapper[4766]: 
[-]poststarthook/start-apiextensions-controllers failed: reason withheld Dec 13 03:49:29 crc kubenswrapper[4766]: [-]poststarthook/crd-informer-synced failed: reason withheld Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/start-system-namespaces-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 13 03:49:29 crc kubenswrapper[4766]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 13 03:49:29 crc kubenswrapper[4766]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/bootstrap-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/start-kube-aggregator-informers ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/apiservice-registration-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/apiservice-discovery-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]autoregister-completion ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/apiservice-openapi-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 13 03:49:29 crc kubenswrapper[4766]: livez check failed Dec 13 03:49:29 crc kubenswrapper[4766]: I1213 03:49:29.643218 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 03:49:31 crc kubenswrapper[4766]: I1213 03:49:31.334015 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:49:34 crc kubenswrapper[4766]: I1213 03:49:34.460172 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz6bv\" (UniqueName: \"kubernetes.io/projected/19086cc5-bade-4f53-90cc-0370b7d2f6a9-kube-api-access-sz6bv\") pod \"oauth-openshift-f6658f7c8-7lmh4\" (UID: \"19086cc5-bade-4f53-90cc-0370b7d2f6a9\") " pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:34 crc kubenswrapper[4766]: I1213 03:49:34.517937 4766 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:34 crc kubenswrapper[4766]: I1213 03:49:34.634999 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:34 crc kubenswrapper[4766]: I1213 03:49:34.639465 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:34 crc kubenswrapper[4766]: I1213 03:49:34.642963 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:34 crc kubenswrapper[4766]: I1213 03:49:34.774245 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bca58e42-fa3c-4503-9858-447d739fc6f8" Dec 13 03:49:35 crc kubenswrapper[4766]: I1213 03:49:35.531965 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" event={"ID":"19086cc5-bade-4f53-90cc-0370b7d2f6a9","Type":"ContainerStarted","Data":"3ccbd9f61d798d009a570bfa061807a50d4dc1093d3361a26d7ca306db17db8d"} Dec 13 03:49:35 crc kubenswrapper[4766]: I1213 03:49:35.532035 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" event={"ID":"19086cc5-bade-4f53-90cc-0370b7d2f6a9","Type":"ContainerStarted","Data":"7844511a7dd2f21c8fc51378c8b485da8ce961dab4ebc1a718e33298f1c1cc9c"} Dec 13 03:49:35 crc kubenswrapper[4766]: I1213 03:49:35.532598 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:35 crc kubenswrapper[4766]: I1213 03:49:35.532852 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:35 crc kubenswrapper[4766]: I1213 03:49:35.532904 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:35 crc kubenswrapper[4766]: I1213 03:49:35.559887 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bca58e42-fa3c-4503-9858-447d739fc6f8" Dec 13 03:49:35 crc kubenswrapper[4766]: I1213 03:49:35.745124 4766 patch_prober.go:28] interesting pod/oauth-openshift-f6658f7c8-7lmh4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.61:6443/healthz\": read tcp 10.217.0.2:34902->10.217.0.61:6443: read: connection reset by peer" start-of-body= Dec 13 03:49:35 crc kubenswrapper[4766]: I1213 03:49:35.745227 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" podUID="19086cc5-bade-4f53-90cc-0370b7d2f6a9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.61:6443/healthz\": read tcp 10.217.0.2:34902->10.217.0.61:6443: read: connection reset by peer" Dec 13 03:49:36 crc kubenswrapper[4766]: I1213 03:49:36.540030 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-f6658f7c8-7lmh4_19086cc5-bade-4f53-90cc-0370b7d2f6a9/oauth-openshift/0.log" Dec 13 03:49:36 crc kubenswrapper[4766]: I1213 03:49:36.540116 4766 generic.go:334] "Generic (PLEG): container finished" podID="19086cc5-bade-4f53-90cc-0370b7d2f6a9" 
containerID="3ccbd9f61d798d009a570bfa061807a50d4dc1093d3361a26d7ca306db17db8d" exitCode=255 Dec 13 03:49:36 crc kubenswrapper[4766]: I1213 03:49:36.540176 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" event={"ID":"19086cc5-bade-4f53-90cc-0370b7d2f6a9","Type":"ContainerDied","Data":"3ccbd9f61d798d009a570bfa061807a50d4dc1093d3361a26d7ca306db17db8d"} Dec 13 03:49:36 crc kubenswrapper[4766]: I1213 03:49:36.540690 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:36 crc kubenswrapper[4766]: I1213 03:49:36.540714 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:36 crc kubenswrapper[4766]: I1213 03:49:36.540887 4766 scope.go:117] "RemoveContainer" containerID="3ccbd9f61d798d009a570bfa061807a50d4dc1093d3361a26d7ca306db17db8d" Dec 13 03:49:36 crc kubenswrapper[4766]: I1213 03:49:36.543998 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bca58e42-fa3c-4503-9858-447d739fc6f8" Dec 13 03:49:37 crc kubenswrapper[4766]: I1213 03:49:37.562199 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-f6658f7c8-7lmh4_19086cc5-bade-4f53-90cc-0370b7d2f6a9/oauth-openshift/1.log" Dec 13 03:49:37 crc kubenswrapper[4766]: I1213 03:49:37.563120 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-f6658f7c8-7lmh4_19086cc5-bade-4f53-90cc-0370b7d2f6a9/oauth-openshift/0.log" Dec 13 03:49:37 crc kubenswrapper[4766]: I1213 03:49:37.563177 4766 generic.go:334] "Generic (PLEG): container finished" podID="19086cc5-bade-4f53-90cc-0370b7d2f6a9" containerID="f967e26eb71eeb9671210b9bc17e837f6d6a6ef09a5f6ed3facdac11c8db4ca3" exitCode=255 Dec 13 03:49:37 crc kubenswrapper[4766]: I1213 03:49:37.563274 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" event={"ID":"19086cc5-bade-4f53-90cc-0370b7d2f6a9","Type":"ContainerDied","Data":"f967e26eb71eeb9671210b9bc17e837f6d6a6ef09a5f6ed3facdac11c8db4ca3"} Dec 13 03:49:37 crc kubenswrapper[4766]: I1213 03:49:37.563377 4766 scope.go:117] "RemoveContainer" containerID="3ccbd9f61d798d009a570bfa061807a50d4dc1093d3361a26d7ca306db17db8d" Dec 13 03:49:37 crc kubenswrapper[4766]: I1213 03:49:37.564276 4766 scope.go:117] "RemoveContainer" containerID="f967e26eb71eeb9671210b9bc17e837f6d6a6ef09a5f6ed3facdac11c8db4ca3" Dec 13 03:49:37 crc kubenswrapper[4766]: E1213 03:49:37.566903 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-f6658f7c8-7lmh4_openshift-authentication(19086cc5-bade-4f53-90cc-0370b7d2f6a9)\"" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" podUID="19086cc5-bade-4f53-90cc-0370b7d2f6a9" Dec 13 03:49:37 crc kubenswrapper[4766]: I1213 03:49:37.751249 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:49:37 crc kubenswrapper[4766]: I1213 03:49:37.755657 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:49:38 crc kubenswrapper[4766]: I1213 03:49:38.571177 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-f6658f7c8-7lmh4_19086cc5-bade-4f53-90cc-0370b7d2f6a9/oauth-openshift/1.log" Dec 13 03:49:38 crc kubenswrapper[4766]: I1213 03:49:38.571657 4766 scope.go:117] "RemoveContainer" containerID="f967e26eb71eeb9671210b9bc17e837f6d6a6ef09a5f6ed3facdac11c8db4ca3" Dec 13 03:49:38 crc kubenswrapper[4766]: E1213 03:49:38.571963 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-f6658f7c8-7lmh4_openshift-authentication(19086cc5-bade-4f53-90cc-0370b7d2f6a9)\"" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" podUID="19086cc5-bade-4f53-90cc-0370b7d2f6a9" Dec 13 03:49:38 crc kubenswrapper[4766]: I1213 03:49:38.576690 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 03:49:44 crc kubenswrapper[4766]: I1213 03:49:44.643098 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:44 crc kubenswrapper[4766]: I1213 03:49:44.643815 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:49:44 crc kubenswrapper[4766]: I1213 03:49:44.645137 4766 scope.go:117] "RemoveContainer" containerID="f967e26eb71eeb9671210b9bc17e837f6d6a6ef09a5f6ed3facdac11c8db4ca3" Dec 13 03:49:44 crc kubenswrapper[4766]: E1213 03:49:44.645860 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-f6658f7c8-7lmh4_openshift-authentication(19086cc5-bade-4f53-90cc-0370b7d2f6a9)\"" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" podUID="19086cc5-bade-4f53-90cc-0370b7d2f6a9" Dec 13 03:49:45 crc kubenswrapper[4766]: I1213 03:49:45.212150 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 13 03:49:45 crc kubenswrapper[4766]: I1213 03:49:45.250842 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 13 03:49:45 crc kubenswrapper[4766]: I1213 03:49:45.281382 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 13 03:49:45 crc kubenswrapper[4766]: I1213 03:49:45.446630 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 13 03:49:45 crc kubenswrapper[4766]: I1213 03:49:45.577951 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 13 03:49:45 crc kubenswrapper[4766]: I1213 03:49:45.833368 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 13 03:49:45 crc kubenswrapper[4766]: I1213 03:49:45.923994 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 13 03:49:45 crc kubenswrapper[4766]: I1213 03:49:45.979701 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 13 03:49:45 crc kubenswrapper[4766]: I1213 03:49:45.986781 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 13 03:49:46 crc kubenswrapper[4766]: I1213 03:49:46.042524 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 13 03:49:46 crc kubenswrapper[4766]: I1213 03:49:46.088532 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 13 03:49:46 crc kubenswrapper[4766]: I1213 03:49:46.160609 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 13 03:49:46 crc kubenswrapper[4766]: I1213 03:49:46.361827 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 13 03:49:46 crc kubenswrapper[4766]: I1213 03:49:46.415542 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 13 03:49:46 crc kubenswrapper[4766]: I1213 03:49:46.491823 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 13 03:49:46 crc kubenswrapper[4766]: I1213 03:49:46.524228 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 13 03:49:46 crc kubenswrapper[4766]: I1213 03:49:46.751770 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 13 03:49:46 crc kubenswrapper[4766]: I1213 03:49:46.865468 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 13 03:49:46 crc kubenswrapper[4766]: I1213 03:49:46.885671 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 13 03:49:47 crc kubenswrapper[4766]: I1213 03:49:47.015476 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 13 03:49:47 crc kubenswrapper[4766]: I1213 03:49:47.031582 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 13 03:49:47 crc kubenswrapper[4766]: I1213 03:49:47.079454 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 13 03:49:47 crc kubenswrapper[4766]: I1213 03:49:47.198935 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 13 03:49:47 crc kubenswrapper[4766]: I1213 03:49:47.203152 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 13 03:49:47 crc kubenswrapper[4766]: I1213 03:49:47.447239 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 13 03:49:47 crc kubenswrapper[4766]: I1213 03:49:47.496038 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 13 03:49:47 
crc kubenswrapper[4766]: I1213 03:49:47.588256 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 13 03:49:47 crc kubenswrapper[4766]: I1213 03:49:47.650112 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 13 03:49:47 crc kubenswrapper[4766]: I1213 03:49:47.759127 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.014903 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.054907 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.227524 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.277745 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.431084 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.483780 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.574668 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.656776 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.708573 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.713654 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.725875 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.748095 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.918809 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.967964 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 13 03:49:48 crc kubenswrapper[4766]: I1213 03:49:48.998038 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.043793 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 13 03:49:49 crc 
kubenswrapper[4766]: I1213 03:49:49.132865 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.168189 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.290612 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.301582 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.314402 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.390014 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.405711 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.536595 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.543223 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.543247 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.574906 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.580199 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.619839 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.622653 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.713750 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.816299 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.878541 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.948351 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 13 03:49:49 crc kubenswrapper[4766]: I1213 03:49:49.987939 4766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.226138 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.265122 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.294817 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.339227 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.346675 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.358747 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.373086 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.377638 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.401868 4766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.557296 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.626974 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.674200 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.860470 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.861050 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.865016 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.917135 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.955278 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 13 03:49:50 crc kubenswrapper[4766]: I1213 03:49:50.968572 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.095966 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 13 
03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.173446 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.175188 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.226820 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.351271 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.374656 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.423168 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.429044 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.449454 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.505946 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.524093 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.572535 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.582884 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.603783 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.663302 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.674289 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.731756 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 13 03:49:51 crc kubenswrapper[4766]: I1213 03:49:51.951834 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.164569 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.168572 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.168605 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.169102 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.205553 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.351699 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.383526 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.469253 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.545386 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.630411 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.631083 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.644194 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.708101 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.716039 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.759064 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.770809 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.770863 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 13 03:49:52 crc kubenswrapper[4766]: I1213 03:49:52.914294 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.069031 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.082356 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.204384 4766 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.204391 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.204617 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.204631 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.270811 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.272950 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.305491 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.330682 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.455290 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.538403 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.767846 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 13 03:49:53 crc kubenswrapper[4766]: I1213 03:49:53.871210 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.001755 4766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.024829 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.027591 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.036036 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.107751 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.316959 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.352018 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 
03:49:54.439145 4766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.440468 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wb4lm" podStartSLOduration=46.192352618 podStartE2EDuration="2m45.440415768s" podCreationTimestamp="2025-12-13 03:47:09 +0000 UTC" firstStartedPulling="2025-12-13 03:47:15.601164219 +0000 UTC m=+167.111097183" lastFinishedPulling="2025-12-13 03:49:14.849227369 +0000 UTC m=+286.359160333" observedRunningTime="2025-12-13 03:49:34.615856197 +0000 UTC m=+306.125789181" watchObservedRunningTime="2025-12-13 03:49:54.440415768 +0000 UTC m=+325.950348742" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.440657 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d95nn" podStartSLOduration=45.851160058 podStartE2EDuration="2m42.440651295s" podCreationTimestamp="2025-12-13 03:47:12 +0000 UTC" firstStartedPulling="2025-12-13 03:47:18.399793975 +0000 UTC m=+169.909726949" lastFinishedPulling="2025-12-13 03:49:14.989285222 +0000 UTC m=+286.499218186" observedRunningTime="2025-12-13 03:49:34.684543654 +0000 UTC m=+306.194476628" watchObservedRunningTime="2025-12-13 03:49:54.440651295 +0000 UTC m=+325.950584289" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.442903 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fsl6d" podStartSLOduration=49.533142934 podStartE2EDuration="2m43.442891081s" podCreationTimestamp="2025-12-13 03:47:11 +0000 UTC" firstStartedPulling="2025-12-13 03:47:21.047993184 +0000 UTC m=+172.557926148" lastFinishedPulling="2025-12-13 03:49:14.957741331 +0000 UTC m=+286.467674295" observedRunningTime="2025-12-13 03:49:34.629752217 +0000 UTC m=+306.139685181" watchObservedRunningTime="2025-12-13 03:49:54.442891081 +0000 UTC m=+325.952824065" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.443287 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p7x6l" podStartSLOduration=48.422478421 podStartE2EDuration="2m45.443281852s" podCreationTimestamp="2025-12-13 03:47:09 +0000 UTC" firstStartedPulling="2025-12-13 03:47:15.611532944 +0000 UTC m=+167.121465908" lastFinishedPulling="2025-12-13 03:49:12.632336375 +0000 UTC m=+284.142269339" observedRunningTime="2025-12-13 03:49:34.524895193 +0000 UTC m=+306.034828157" watchObservedRunningTime="2025-12-13 03:49:54.443281852 +0000 UTC m=+325.953214826" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.443850 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hzmqb" podStartSLOduration=48.483511334 podStartE2EDuration="2m45.443844449s" podCreationTimestamp="2025-12-13 03:47:09 +0000 UTC" firstStartedPulling="2025-12-13 03:47:15.59779309 +0000 UTC m=+167.107726054" lastFinishedPulling="2025-12-13 03:49:12.558126205 +0000 UTC m=+284.068059169" observedRunningTime="2025-12-13 03:49:34.497123874 +0000 UTC m=+306.007056838" watchObservedRunningTime="2025-12-13 03:49:54.443844449 +0000 UTC m=+325.953777433" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.444458 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kq9t5" podStartSLOduration=48.772911915 podStartE2EDuration="2m41.444419136s" 
podCreationTimestamp="2025-12-13 03:47:13 +0000 UTC" firstStartedPulling="2025-12-13 03:47:19.975412854 +0000 UTC m=+171.485345828" lastFinishedPulling="2025-12-13 03:49:12.646920065 +0000 UTC m=+284.156853049" observedRunningTime="2025-12-13 03:49:34.69965909 +0000 UTC m=+306.209592074" watchObservedRunningTime="2025-12-13 03:49:54.444419136 +0000 UTC m=+325.954352110" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.444701 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6dqhc" podStartSLOduration=45.192664786 podStartE2EDuration="2m45.444696824s" podCreationTimestamp="2025-12-13 03:47:09 +0000 UTC" firstStartedPulling="2025-12-13 03:47:14.476133799 +0000 UTC m=+165.986066763" lastFinishedPulling="2025-12-13 03:49:14.728165837 +0000 UTC m=+286.238098801" observedRunningTime="2025-12-13 03:49:34.479204505 +0000 UTC m=+305.989137469" watchObservedRunningTime="2025-12-13 03:49:54.444696824 +0000 UTC m=+325.954629798" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.446462 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c4b47bc-vshkm","openshift-route-controller-manager/route-controller-manager-5c884cd845-kkzq2","openshift-authentication/oauth-openshift-558db77b4-l2gzj","openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.446550 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg","openshift-controller-manager/controller-manager-75459f8749-sm486"] Dec 13 03:49:54 crc kubenswrapper[4766]: E1213 03:49:54.446847 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" containerName="installer" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.446880 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" containerName="installer" Dec 13 03:49:54 crc kubenswrapper[4766]: E1213 03:49:54.446948 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" containerName="controller-manager" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.446960 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" containerName="controller-manager" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.447273 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.447353 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb2779e9-fd47-4ea6-a75c-b0c24339b1c5" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.447759 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="151a6cc7-14cd-4ac6-a65e-d165c2e8520f" containerName="installer" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.447790 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" containerName="controller-manager" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.448863 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f6658f7c8-7lmh4"] Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.449051 4766 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.449216 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.451097 4766 scope.go:117] "RemoveContainer" containerID="f967e26eb71eeb9671210b9bc17e837f6d6a6ef09a5f6ed3facdac11c8db4ca3" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.452938 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.453093 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.455300 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.462666 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.462725 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.462893 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.462937 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.463271 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.463375 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.463475 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.463628 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.463702 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.463758 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.467915 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.484782 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.484751806 podStartE2EDuration="20.484751806s" podCreationTimestamp="2025-12-13 03:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:49:54.480883622 +0000 UTC m=+325.990816586" watchObservedRunningTime="2025-12-13 03:49:54.484751806 +0000 UTC m=+325.994684770" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.640175 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.656364 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.666505 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.680200 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7hdp\" (UniqueName: \"kubernetes.io/projected/812ac446-aef3-4824-8ea8-7857a0067955-kube-api-access-m7hdp\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.680779 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxqw5\" (UniqueName: \"kubernetes.io/projected/44f14baf-9051-46b3-8f2b-0f65ef2805cc-kube-api-access-wxqw5\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.680856 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-client-ca\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.680884 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-client-ca\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.680913 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-config\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.680947 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44f14baf-9051-46b3-8f2b-0f65ef2805cc-serving-cert\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.680979 4766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812ac446-aef3-4824-8ea8-7857a0067955-serving-cert\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.681027 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-proxy-ca-bundles\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.681153 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-config\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.719781 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.782573 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-client-ca\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.782712 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-client-ca\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.782792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-config\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.782856 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44f14baf-9051-46b3-8f2b-0f65ef2805cc-serving-cert\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.782884 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812ac446-aef3-4824-8ea8-7857a0067955-serving-cert\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc 
kubenswrapper[4766]: I1213 03:49:54.782949 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-proxy-ca-bundles\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.783058 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-config\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.783147 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7hdp\" (UniqueName: \"kubernetes.io/projected/812ac446-aef3-4824-8ea8-7857a0067955-kube-api-access-m7hdp\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.783212 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxqw5\" (UniqueName: \"kubernetes.io/projected/44f14baf-9051-46b3-8f2b-0f65ef2805cc-kube-api-access-wxqw5\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.783789 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.784228 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-client-ca\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.785134 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-config\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.785335 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-client-ca\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.785600 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-proxy-ca-bundles\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " 
pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.785735 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-config\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.791453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44f14baf-9051-46b3-8f2b-0f65ef2805cc-serving-cert\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.791466 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812ac446-aef3-4824-8ea8-7857a0067955-serving-cert\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.801777 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7hdp\" (UniqueName: \"kubernetes.io/projected/812ac446-aef3-4824-8ea8-7857a0067955-kube-api-access-m7hdp\") pod \"controller-manager-75459f8749-sm486\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.802787 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.805035 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxqw5\" (UniqueName: \"kubernetes.io/projected/44f14baf-9051-46b3-8f2b-0f65ef2805cc-kube-api-access-wxqw5\") pod \"route-controller-manager-5fb5df5559-vkjxg\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.920542 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.934904 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.948516 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.969297 4766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 13 03:49:54 crc kubenswrapper[4766]: I1213 03:49:54.988654 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.074724 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.085191 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.086635 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.098789 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.172991 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.292305 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.578490 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg"] Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.588074 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.725928 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.726604 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.726927 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.727802 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.728149 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.728386 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.728716 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.728771 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.729259 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.729348 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.729609 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.730004 4766 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.733064 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.739448 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.740738 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.742147 4766 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.742500 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.754609 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f461109-9a3d-4f7a-9265-1db9a81122e5" path="/var/lib/kubelet/pods/3f461109-9a3d-4f7a-9265-1db9a81122e5/volumes" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.756067 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4724b2a-c46a-4d7b-8227-6a15e978dd16" path="/var/lib/kubelet/pods/c4724b2a-c46a-4d7b-8227-6a15e978dd16/volumes" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.757229 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a" path="/var/lib/kubelet/pods/ce379a6f-ac21-4ca4-8fa9-5ae03be96c6a/volumes" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.762151 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" event={"ID":"44f14baf-9051-46b3-8f2b-0f65ef2805cc","Type":"ContainerStarted","Data":"1a66cea2037f034c3ab22026de3b6eca3f982d792bf19528a6b1bf9b22739ca3"} Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.773120 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-f6658f7c8-7lmh4_19086cc5-bade-4f53-90cc-0370b7d2f6a9/oauth-openshift/2.log" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.775519 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.775907 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-f6658f7c8-7lmh4_19086cc5-bade-4f53-90cc-0370b7d2f6a9/oauth-openshift/1.log" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.776128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" event={"ID":"19086cc5-bade-4f53-90cc-0370b7d2f6a9","Type":"ContainerStarted","Data":"772d6638d3366293f44cf5833ab9b59222835d41052e880d10104975edc851e1"} Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.777500 4766 scope.go:117] "RemoveContainer" containerID="772d6638d3366293f44cf5833ab9b59222835d41052e880d10104975edc851e1" Dec 13 03:49:55 crc kubenswrapper[4766]: E1213 03:49:55.777804 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting 
failed container=oauth-openshift pod=oauth-openshift-f6658f7c8-7lmh4_openshift-authentication(19086cc5-bade-4f53-90cc-0370b7d2f6a9)\"" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" podUID="19086cc5-bade-4f53-90cc-0370b7d2f6a9" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.784203 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75459f8749-sm486"] Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.837013 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 13 03:49:55 crc kubenswrapper[4766]: I1213 03:49:55.975323 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.010700 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.031947 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.070676 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.083273 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.083667 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.111816 4766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.180933 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.242492 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.288972 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.293764 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.319567 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.397582 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.398882 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.591456 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.603835 4766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.604998 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.783116 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" event={"ID":"44f14baf-9051-46b3-8f2b-0f65ef2805cc","Type":"ContainerStarted","Data":"c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9"} Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.783392 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.784686 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-f6658f7c8-7lmh4_19086cc5-bade-4f53-90cc-0370b7d2f6a9/oauth-openshift/2.log" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.785775 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-f6658f7c8-7lmh4_19086cc5-bade-4f53-90cc-0370b7d2f6a9/oauth-openshift/1.log" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.785817 4766 generic.go:334] "Generic (PLEG): container finished" podID="19086cc5-bade-4f53-90cc-0370b7d2f6a9" containerID="772d6638d3366293f44cf5833ab9b59222835d41052e880d10104975edc851e1" exitCode=255 Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.785872 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" event={"ID":"19086cc5-bade-4f53-90cc-0370b7d2f6a9","Type":"ContainerDied","Data":"772d6638d3366293f44cf5833ab9b59222835d41052e880d10104975edc851e1"} Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.785904 4766 scope.go:117] "RemoveContainer" containerID="f967e26eb71eeb9671210b9bc17e837f6d6a6ef09a5f6ed3facdac11c8db4ca3" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.786541 4766 scope.go:117] "RemoveContainer" containerID="772d6638d3366293f44cf5833ab9b59222835d41052e880d10104975edc851e1" Dec 13 03:49:56 crc kubenswrapper[4766]: E1213 03:49:56.786814 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-f6658f7c8-7lmh4_openshift-authentication(19086cc5-bade-4f53-90cc-0370b7d2f6a9)\"" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" podUID="19086cc5-bade-4f53-90cc-0370b7d2f6a9" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.788136 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" event={"ID":"812ac446-aef3-4824-8ea8-7857a0067955","Type":"ContainerStarted","Data":"69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb"} Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.788176 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" event={"ID":"812ac446-aef3-4824-8ea8-7857a0067955","Type":"ContainerStarted","Data":"e1f301fb1b428657ba82d9766cde25f1818aac22432408447b778b5496ffdf41"} Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.789621 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.795098 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.802519 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.811771 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" podStartSLOduration=50.81174603 podStartE2EDuration="50.81174603s" podCreationTimestamp="2025-12-13 03:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:49:56.809269878 +0000 UTC m=+328.319202852" watchObservedRunningTime="2025-12-13 03:49:56.81174603 +0000 UTC m=+328.321678994" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.848862 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" podStartSLOduration=50.848839525 podStartE2EDuration="50.848839525s" podCreationTimestamp="2025-12-13 03:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:49:56.848833295 +0000 UTC m=+328.358766269" watchObservedRunningTime="2025-12-13 03:49:56.848839525 +0000 UTC m=+328.358772489" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.865742 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 13 03:49:56 crc kubenswrapper[4766]: I1213 03:49:56.903695 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.141881 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.154886 4766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.155225 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111" gracePeriod=5 Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.336198 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.459645 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.470704 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.557795 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.561101 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.630444 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.677969 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.701621 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.712305 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.714106 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.783836 4766 patch_prober.go:28] interesting pod/route-controller-manager-5fb5df5559-vkjxg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.783939 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" podUID="44f14baf-9051-46b3-8f2b-0f65ef2805cc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.797890 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-f6658f7c8-7lmh4_19086cc5-bade-4f53-90cc-0370b7d2f6a9/oauth-openshift/2.log" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.853885 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 13 03:49:57 crc kubenswrapper[4766]: I1213 03:49:57.901526 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.058169 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.059719 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.173050 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.189820 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 13 03:49:58 crc kubenswrapper[4766]: 
I1213 03:49:58.334311 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.350556 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.385057 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.444073 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.513834 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.578906 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.624104 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.657296 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.730220 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.798760 4766 patch_prober.go:28] interesting pod/route-controller-manager-5fb5df5559-vkjxg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded" start-of-body= Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.798837 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" podUID="44f14baf-9051-46b3-8f2b-0f65ef2805cc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.825464 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.884284 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.904836 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 13 03:49:58 crc kubenswrapper[4766]: I1213 03:49:58.970666 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 13 03:49:59 crc kubenswrapper[4766]: I1213 03:49:59.115769 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 13 03:49:59 crc kubenswrapper[4766]: I1213 03:49:59.246999 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 13 03:49:59 crc 
kubenswrapper[4766]: I1213 03:49:59.303285 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 13 03:49:59 crc kubenswrapper[4766]: I1213 03:49:59.381848 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 13 03:49:59 crc kubenswrapper[4766]: I1213 03:49:59.439153 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 13 03:49:59 crc kubenswrapper[4766]: I1213 03:49:59.671737 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 13 03:49:59 crc kubenswrapper[4766]: I1213 03:49:59.894662 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 13 03:50:00 crc kubenswrapper[4766]: I1213 03:50:00.051328 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 13 03:50:00 crc kubenswrapper[4766]: I1213 03:50:00.129108 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 13 03:50:00 crc kubenswrapper[4766]: I1213 03:50:00.141356 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 13 03:50:00 crc kubenswrapper[4766]: I1213 03:50:00.266941 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 13 03:50:00 crc kubenswrapper[4766]: I1213 03:50:00.658990 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 13 03:50:01 crc kubenswrapper[4766]: I1213 03:50:01.428103 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 13 03:50:01 crc kubenswrapper[4766]: I1213 03:50:01.484042 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.734916 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.735486 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.830573 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.830643 4766 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111" exitCode=137 Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.830711 4766 scope.go:117] "RemoveContainer" containerID="8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.830709 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.833311 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.833460 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.833633 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.833825 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.833649 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.833897 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.834270 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.834490 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.834957 4766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.835289 4766 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.834361 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.835402 4766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.847184 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.848408 4766 scope.go:117] "RemoveContainer" containerID="8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.854001 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 13 03:50:02 crc kubenswrapper[4766]: E1213 03:50:02.858020 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111\": container with ID starting with 8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111 not found: ID does not exist" containerID="8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.858087 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111"} err="failed to get container status \"8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111\": rpc error: code = NotFound desc = could not find container \"8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111\": container with ID starting with 8402de98bd45acbe2043e7d30396adb974048b4794bcdd7ed896493fa7875111 not found: ID does not exist" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.936930 4766 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:02 crc kubenswrapper[4766]: I1213 03:50:02.936989 4766 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:03 crc kubenswrapper[4766]: I1213 03:50:03.628178 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 13 03:50:04 crc kubenswrapper[4766]: I1213 03:50:04.643117 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:50:04 crc kubenswrapper[4766]: I1213 03:50:04.643494 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:50:04 crc kubenswrapper[4766]: I1213 03:50:04.644196 4766 scope.go:117] "RemoveContainer" containerID="772d6638d3366293f44cf5833ab9b59222835d41052e880d10104975edc851e1" Dec 13 03:50:04 crc kubenswrapper[4766]: E1213 03:50:04.644419 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-f6658f7c8-7lmh4_openshift-authentication(19086cc5-bade-4f53-90cc-0370b7d2f6a9)\"" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" podUID="19086cc5-bade-4f53-90cc-0370b7d2f6a9" Dec 13 03:50:05 crc kubenswrapper[4766]: I1213 03:50:05.091733 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 
03:50:06.315755 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75459f8749-sm486"] Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.316682 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" podUID="812ac446-aef3-4824-8ea8-7857a0067955" containerName="controller-manager" containerID="cri-o://69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb" gracePeriod=30 Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.420289 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg"] Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.420595 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" podUID="44f14baf-9051-46b3-8f2b-0f65ef2805cc" containerName="route-controller-manager" containerID="cri-o://c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9" gracePeriod=30 Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.760403 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.829939 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.887470 4766 generic.go:334] "Generic (PLEG): container finished" podID="812ac446-aef3-4824-8ea8-7857a0067955" containerID="69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb" exitCode=0 Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.887553 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.887652 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" event={"ID":"812ac446-aef3-4824-8ea8-7857a0067955","Type":"ContainerDied","Data":"69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb"} Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.887696 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75459f8749-sm486" event={"ID":"812ac446-aef3-4824-8ea8-7857a0067955","Type":"ContainerDied","Data":"e1f301fb1b428657ba82d9766cde25f1818aac22432408447b778b5496ffdf41"} Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.887719 4766 scope.go:117] "RemoveContainer" containerID="69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.892044 4766 generic.go:334] "Generic (PLEG): container finished" podID="44f14baf-9051-46b3-8f2b-0f65ef2805cc" containerID="c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9" exitCode=0 Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.892105 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" event={"ID":"44f14baf-9051-46b3-8f2b-0f65ef2805cc","Type":"ContainerDied","Data":"c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9"} Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.892147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" event={"ID":"44f14baf-9051-46b3-8f2b-0f65ef2805cc","Type":"ContainerDied","Data":"1a66cea2037f034c3ab22026de3b6eca3f982d792bf19528a6b1bf9b22739ca3"} Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.892218 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.902030 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812ac446-aef3-4824-8ea8-7857a0067955-serving-cert\") pod \"812ac446-aef3-4824-8ea8-7857a0067955\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.902119 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-client-ca\") pod \"812ac446-aef3-4824-8ea8-7857a0067955\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.902223 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-proxy-ca-bundles\") pod \"812ac446-aef3-4824-8ea8-7857a0067955\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.902255 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7hdp\" (UniqueName: \"kubernetes.io/projected/812ac446-aef3-4824-8ea8-7857a0067955-kube-api-access-m7hdp\") pod \"812ac446-aef3-4824-8ea8-7857a0067955\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.902362 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-config\") pod \"812ac446-aef3-4824-8ea8-7857a0067955\" (UID: \"812ac446-aef3-4824-8ea8-7857a0067955\") " Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.903071 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-client-ca" (OuterVolumeSpecName: "client-ca") pod "812ac446-aef3-4824-8ea8-7857a0067955" (UID: "812ac446-aef3-4824-8ea8-7857a0067955"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.903443 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "812ac446-aef3-4824-8ea8-7857a0067955" (UID: "812ac446-aef3-4824-8ea8-7857a0067955"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.903549 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-config" (OuterVolumeSpecName: "config") pod "812ac446-aef3-4824-8ea8-7857a0067955" (UID: "812ac446-aef3-4824-8ea8-7857a0067955"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.913664 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/812ac446-aef3-4824-8ea8-7857a0067955-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "812ac446-aef3-4824-8ea8-7857a0067955" (UID: "812ac446-aef3-4824-8ea8-7857a0067955"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.913708 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812ac446-aef3-4824-8ea8-7857a0067955-kube-api-access-m7hdp" (OuterVolumeSpecName: "kube-api-access-m7hdp") pod "812ac446-aef3-4824-8ea8-7857a0067955" (UID: "812ac446-aef3-4824-8ea8-7857a0067955"). InnerVolumeSpecName "kube-api-access-m7hdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.924918 4766 scope.go:117] "RemoveContainer" containerID="69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb" Dec 13 03:50:06 crc kubenswrapper[4766]: E1213 03:50:06.925839 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb\": container with ID starting with 69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb not found: ID does not exist" containerID="69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.925904 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb"} err="failed to get container status \"69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb\": rpc error: code = NotFound desc = could not find container \"69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb\": container with ID starting with 69308e4944351f3b0304a0af7bb67e70628e41e44913a8009144e895348580eb not found: ID does not exist" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.925937 4766 scope.go:117] "RemoveContainer" containerID="c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.945303 4766 scope.go:117] "RemoveContainer" containerID="c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9" Dec 13 03:50:06 crc kubenswrapper[4766]: E1213 03:50:06.945894 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9\": container with ID starting with c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9 not found: ID does not exist" containerID="c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9" Dec 13 03:50:06 crc kubenswrapper[4766]: I1213 03:50:06.945929 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9"} err="failed to get container status \"c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9\": rpc error: code = NotFound desc = could not find container \"c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9\": container with ID starting with c3bc9a670252554f64c1e88f2d904f491a08a1260c11a91f3fabf47a7c427cb9 not found: ID does not exist" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.003870 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-client-ca\") pod \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " Dec 13 03:50:07 crc kubenswrapper[4766]: 
I1213 03:50:07.003950 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44f14baf-9051-46b3-8f2b-0f65ef2805cc-serving-cert\") pod \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.004078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxqw5\" (UniqueName: \"kubernetes.io/projected/44f14baf-9051-46b3-8f2b-0f65ef2805cc-kube-api-access-wxqw5\") pod \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.004127 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-config\") pod \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\" (UID: \"44f14baf-9051-46b3-8f2b-0f65ef2805cc\") " Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.004411 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.004443 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812ac446-aef3-4824-8ea8-7857a0067955-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.004455 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.004467 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/812ac446-aef3-4824-8ea8-7857a0067955-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.004484 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7hdp\" (UniqueName: \"kubernetes.io/projected/812ac446-aef3-4824-8ea8-7857a0067955-kube-api-access-m7hdp\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.004688 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-client-ca" (OuterVolumeSpecName: "client-ca") pod "44f14baf-9051-46b3-8f2b-0f65ef2805cc" (UID: "44f14baf-9051-46b3-8f2b-0f65ef2805cc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.004800 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-config" (OuterVolumeSpecName: "config") pod "44f14baf-9051-46b3-8f2b-0f65ef2805cc" (UID: "44f14baf-9051-46b3-8f2b-0f65ef2805cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.006998 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f14baf-9051-46b3-8f2b-0f65ef2805cc-kube-api-access-wxqw5" (OuterVolumeSpecName: "kube-api-access-wxqw5") pod "44f14baf-9051-46b3-8f2b-0f65ef2805cc" (UID: "44f14baf-9051-46b3-8f2b-0f65ef2805cc"). 
InnerVolumeSpecName "kube-api-access-wxqw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.007528 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f14baf-9051-46b3-8f2b-0f65ef2805cc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "44f14baf-9051-46b3-8f2b-0f65ef2805cc" (UID: "44f14baf-9051-46b3-8f2b-0f65ef2805cc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.105453 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.105716 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44f14baf-9051-46b3-8f2b-0f65ef2805cc-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.105730 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxqw5\" (UniqueName: \"kubernetes.io/projected/44f14baf-9051-46b3-8f2b-0f65ef2805cc-kube-api-access-wxqw5\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.105741 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f14baf-9051-46b3-8f2b-0f65ef2805cc-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.214962 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75459f8749-sm486"] Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.220364 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75459f8749-sm486"] Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.227037 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg"] Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.230365 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fb5df5559-vkjxg"] Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.623725 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44f14baf-9051-46b3-8f2b-0f65ef2805cc" path="/var/lib/kubelet/pods/44f14baf-9051-46b3-8f2b-0f65ef2805cc/volumes" Dec 13 03:50:07 crc kubenswrapper[4766]: I1213 03:50:07.624301 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812ac446-aef3-4824-8ea8-7857a0067955" path="/var/lib/kubelet/pods/812ac446-aef3-4824-8ea8-7857a0067955/volumes" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.225400 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c"] Dec 13 03:50:08 crc kubenswrapper[4766]: E1213 03:50:08.225977 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f14baf-9051-46b3-8f2b-0f65ef2805cc" containerName="route-controller-manager" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.226016 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f14baf-9051-46b3-8f2b-0f65ef2805cc" containerName="route-controller-manager" Dec 13 03:50:08 crc kubenswrapper[4766]: E1213 03:50:08.226063 4766 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.226076 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 13 03:50:08 crc kubenswrapper[4766]: E1213 03:50:08.226100 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812ac446-aef3-4824-8ea8-7857a0067955" containerName="controller-manager" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.226113 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="812ac446-aef3-4824-8ea8-7857a0067955" containerName="controller-manager" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.226315 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="812ac446-aef3-4824-8ea8-7857a0067955" containerName="controller-manager" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.226357 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.226369 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f14baf-9051-46b3-8f2b-0f65ef2805cc" containerName="route-controller-manager" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.227231 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.230071 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb"] Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.230944 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.233019 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.233257 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.233764 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.234931 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.236208 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.236499 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.236633 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.236923 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.237064 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.237113 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.237230 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.237282 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.244379 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.254937 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c"] Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.264996 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb"] Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.322478 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbtv2\" (UniqueName: \"kubernetes.io/projected/b6b83012-bc43-4de8-b840-102036db4b47-kube-api-access-lbtv2\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.322617 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4640cb2b-933c-4203-b4f4-43e7d4b674e3-serving-cert\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.322659 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25v9r\" (UniqueName: \"kubernetes.io/projected/4640cb2b-933c-4203-b4f4-43e7d4b674e3-kube-api-access-25v9r\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.322810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-client-ca\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.322893 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-config\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.322964 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-client-ca\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.322994 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6b83012-bc43-4de8-b840-102036db4b47-serving-cert\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.323033 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-proxy-ca-bundles\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.323110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-config\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.425072 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbtv2\" (UniqueName: 
\"kubernetes.io/projected/b6b83012-bc43-4de8-b840-102036db4b47-kube-api-access-lbtv2\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.425153 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4640cb2b-933c-4203-b4f4-43e7d4b674e3-serving-cert\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.425224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25v9r\" (UniqueName: \"kubernetes.io/projected/4640cb2b-933c-4203-b4f4-43e7d4b674e3-kube-api-access-25v9r\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.427412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-client-ca\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.427638 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-config\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.427824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-client-ca\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.427971 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6b83012-bc43-4de8-b840-102036db4b47-serving-cert\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.428288 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-proxy-ca-bundles\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.428654 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-config\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: 
\"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.428931 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-client-ca\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.429233 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-client-ca\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.429496 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-config\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.430007 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-config\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.430820 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-proxy-ca-bundles\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.435253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4640cb2b-933c-4203-b4f4-43e7d4b674e3-serving-cert\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.435954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6b83012-bc43-4de8-b840-102036db4b47-serving-cert\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.444897 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25v9r\" (UniqueName: \"kubernetes.io/projected/4640cb2b-933c-4203-b4f4-43e7d4b674e3-kube-api-access-25v9r\") pod \"controller-manager-747d6fd8f4-zpxwb\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.446811 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lbtv2\" (UniqueName: \"kubernetes.io/projected/b6b83012-bc43-4de8-b840-102036db4b47-kube-api-access-lbtv2\") pod \"route-controller-manager-df9db78f9-mbl2c\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.567074 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.574228 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.809280 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c"] Dec 13 03:50:08 crc kubenswrapper[4766]: W1213 03:50:08.819283 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6b83012_bc43_4de8_b840_102036db4b47.slice/crio-e775b7113b68285355f86b6bf215e6cf9eb7bfe6084b15ef4861df84a0724729 WatchSource:0}: Error finding container e775b7113b68285355f86b6bf215e6cf9eb7bfe6084b15ef4861df84a0724729: Status 404 returned error can't find the container with id e775b7113b68285355f86b6bf215e6cf9eb7bfe6084b15ef4861df84a0724729 Dec 13 03:50:08 crc kubenswrapper[4766]: I1213 03:50:08.908905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" event={"ID":"b6b83012-bc43-4de8-b840-102036db4b47","Type":"ContainerStarted","Data":"e775b7113b68285355f86b6bf215e6cf9eb7bfe6084b15ef4861df84a0724729"} Dec 13 03:50:09 crc kubenswrapper[4766]: I1213 03:50:09.004168 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb"] Dec 13 03:50:09 crc kubenswrapper[4766]: W1213 03:50:09.012186 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4640cb2b_933c_4203_b4f4_43e7d4b674e3.slice/crio-cfa17077c52c22cc03eedfb9b57f49e1f75c94e4c4d47756e6cb96b4c23941e2 WatchSource:0}: Error finding container cfa17077c52c22cc03eedfb9b57f49e1f75c94e4c4d47756e6cb96b4c23941e2: Status 404 returned error can't find the container with id cfa17077c52c22cc03eedfb9b57f49e1f75c94e4c4d47756e6cb96b4c23941e2 Dec 13 03:50:09 crc kubenswrapper[4766]: I1213 03:50:09.917725 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" event={"ID":"4640cb2b-933c-4203-b4f4-43e7d4b674e3","Type":"ContainerStarted","Data":"21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6"} Dec 13 03:50:09 crc kubenswrapper[4766]: I1213 03:50:09.918116 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" event={"ID":"4640cb2b-933c-4203-b4f4-43e7d4b674e3","Type":"ContainerStarted","Data":"cfa17077c52c22cc03eedfb9b57f49e1f75c94e4c4d47756e6cb96b4c23941e2"} Dec 13 03:50:09 crc kubenswrapper[4766]: I1213 03:50:09.918144 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:09 crc kubenswrapper[4766]: I1213 03:50:09.920929 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" event={"ID":"b6b83012-bc43-4de8-b840-102036db4b47","Type":"ContainerStarted","Data":"27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046"} Dec 13 03:50:09 crc kubenswrapper[4766]: I1213 03:50:09.921173 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:09 crc kubenswrapper[4766]: I1213 03:50:09.926371 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:09 crc kubenswrapper[4766]: I1213 03:50:09.928613 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:09 crc kubenswrapper[4766]: I1213 03:50:09.938898 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" podStartSLOduration=3.938877955 podStartE2EDuration="3.938877955s" podCreationTimestamp="2025-12-13 03:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:50:09.937732172 +0000 UTC m=+341.447665146" watchObservedRunningTime="2025-12-13 03:50:09.938877955 +0000 UTC m=+341.448810919" Dec 13 03:50:09 crc kubenswrapper[4766]: I1213 03:50:09.980499 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" podStartSLOduration=3.980472602 podStartE2EDuration="3.980472602s" podCreationTimestamp="2025-12-13 03:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:50:09.976685291 +0000 UTC m=+341.486618275" watchObservedRunningTime="2025-12-13 03:50:09.980472602 +0000 UTC m=+341.490405576" Dec 13 03:50:16 crc kubenswrapper[4766]: I1213 03:50:16.616908 4766 scope.go:117] "RemoveContainer" containerID="772d6638d3366293f44cf5833ab9b59222835d41052e880d10104975edc851e1" Dec 13 03:50:17 crc kubenswrapper[4766]: I1213 03:50:17.062113 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-f6658f7c8-7lmh4_19086cc5-bade-4f53-90cc-0370b7d2f6a9/oauth-openshift/2.log" Dec 13 03:50:17 crc kubenswrapper[4766]: I1213 03:50:17.062545 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" event={"ID":"19086cc5-bade-4f53-90cc-0370b7d2f6a9","Type":"ContainerStarted","Data":"9a6248d4bec723e2f8545e128d66c79b89d4fc9df5c1955a59ffee7d3199ab79"} Dec 13 03:50:17 crc kubenswrapper[4766]: I1213 03:50:17.099099 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" podStartSLOduration=102.099077942 podStartE2EDuration="1m42.099077942s" podCreationTimestamp="2025-12-13 03:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:49:35.557483482 +0000 UTC m=+307.067416446" watchObservedRunningTime="2025-12-13 03:50:17.099077942 +0000 UTC m=+348.609010906" Dec 13 03:50:24 crc kubenswrapper[4766]: I1213 03:50:24.643296 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:50:24 crc kubenswrapper[4766]: I1213 03:50:24.651917 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f6658f7c8-7lmh4" Dec 13 03:50:26 crc kubenswrapper[4766]: I1213 03:50:26.307756 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb"] Dec 13 03:50:26 crc kubenswrapper[4766]: I1213 03:50:26.308418 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" podUID="4640cb2b-933c-4203-b4f4-43e7d4b674e3" containerName="controller-manager" containerID="cri-o://21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6" gracePeriod=30 Dec 13 03:50:26 crc kubenswrapper[4766]: I1213 03:50:26.326545 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c"] Dec 13 03:50:26 crc kubenswrapper[4766]: I1213 03:50:26.326834 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" podUID="b6b83012-bc43-4de8-b840-102036db4b47" containerName="route-controller-manager" containerID="cri-o://27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046" gracePeriod=30 Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.028756 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.083752 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.132351 4766 generic.go:334] "Generic (PLEG): container finished" podID="4640cb2b-933c-4203-b4f4-43e7d4b674e3" containerID="21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6" exitCode=0 Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.132462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" event={"ID":"4640cb2b-933c-4203-b4f4-43e7d4b674e3","Type":"ContainerDied","Data":"21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6"} Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.132499 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" event={"ID":"4640cb2b-933c-4203-b4f4-43e7d4b674e3","Type":"ContainerDied","Data":"cfa17077c52c22cc03eedfb9b57f49e1f75c94e4c4d47756e6cb96b4c23941e2"} Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.132527 4766 scope.go:117] "RemoveContainer" containerID="21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.132653 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.135365 4766 generic.go:334] "Generic (PLEG): container finished" podID="b6b83012-bc43-4de8-b840-102036db4b47" containerID="27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046" exitCode=0 Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.135402 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.135409 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" event={"ID":"b6b83012-bc43-4de8-b840-102036db4b47","Type":"ContainerDied","Data":"27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046"} Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.135898 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c" event={"ID":"b6b83012-bc43-4de8-b840-102036db4b47","Type":"ContainerDied","Data":"e775b7113b68285355f86b6bf215e6cf9eb7bfe6084b15ef4861df84a0724729"} Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.157717 4766 scope.go:117] "RemoveContainer" containerID="21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6" Dec 13 03:50:27 crc kubenswrapper[4766]: E1213 03:50:27.158213 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6\": container with ID starting with 21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6 not found: ID does not exist" containerID="21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.158270 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6"} err="failed to get container status \"21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6\": rpc error: code = NotFound desc = could not find container \"21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6\": container with ID starting with 21146bbac69008928f44ea6815e24a41f4e53b8568893580295c9e48d3377cb6 not found: ID does not exist" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.158308 4766 scope.go:117] "RemoveContainer" containerID="27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.173522 4766 scope.go:117] "RemoveContainer" containerID="27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046" Dec 13 03:50:27 crc kubenswrapper[4766]: E1213 03:50:27.173975 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046\": container with ID starting with 27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046 not found: ID does not exist" containerID="27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.174025 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046"} err="failed to get container status \"27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046\": rpc error: code = NotFound desc = could not find container \"27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046\": container with ID starting with 27b51c01130322616e224de7288cbf4408ccb66573018c4015a55556bbef7046 not found: ID does not exist" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.212573 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6b83012-bc43-4de8-b840-102036db4b47-serving-cert\") pod \"b6b83012-bc43-4de8-b840-102036db4b47\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.212622 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4640cb2b-933c-4203-b4f4-43e7d4b674e3-serving-cert\") pod \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.212673 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-proxy-ca-bundles\") pod \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.212709 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-config\") pod \"b6b83012-bc43-4de8-b840-102036db4b47\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.212747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25v9r\" (UniqueName: \"kubernetes.io/projected/4640cb2b-933c-4203-b4f4-43e7d4b674e3-kube-api-access-25v9r\") pod \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.212765 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-config\") pod \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.212794 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-client-ca\") pod \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\" (UID: \"4640cb2b-933c-4203-b4f4-43e7d4b674e3\") " Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.212833 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbtv2\" (UniqueName: \"kubernetes.io/projected/b6b83012-bc43-4de8-b840-102036db4b47-kube-api-access-lbtv2\") pod \"b6b83012-bc43-4de8-b840-102036db4b47\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.212881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-client-ca\") pod 
\"b6b83012-bc43-4de8-b840-102036db4b47\" (UID: \"b6b83012-bc43-4de8-b840-102036db4b47\") " Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.214177 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-client-ca" (OuterVolumeSpecName: "client-ca") pod "4640cb2b-933c-4203-b4f4-43e7d4b674e3" (UID: "4640cb2b-933c-4203-b4f4-43e7d4b674e3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.214192 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-config" (OuterVolumeSpecName: "config") pod "b6b83012-bc43-4de8-b840-102036db4b47" (UID: "b6b83012-bc43-4de8-b840-102036db4b47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.214249 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-config" (OuterVolumeSpecName: "config") pod "4640cb2b-933c-4203-b4f4-43e7d4b674e3" (UID: "4640cb2b-933c-4203-b4f4-43e7d4b674e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.214750 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4640cb2b-933c-4203-b4f4-43e7d4b674e3" (UID: "4640cb2b-933c-4203-b4f4-43e7d4b674e3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.214918 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-client-ca" (OuterVolumeSpecName: "client-ca") pod "b6b83012-bc43-4de8-b840-102036db4b47" (UID: "b6b83012-bc43-4de8-b840-102036db4b47"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.218238 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b83012-bc43-4de8-b840-102036db4b47-kube-api-access-lbtv2" (OuterVolumeSpecName: "kube-api-access-lbtv2") pod "b6b83012-bc43-4de8-b840-102036db4b47" (UID: "b6b83012-bc43-4de8-b840-102036db4b47"). InnerVolumeSpecName "kube-api-access-lbtv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.218361 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b83012-bc43-4de8-b840-102036db4b47-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b6b83012-bc43-4de8-b840-102036db4b47" (UID: "b6b83012-bc43-4de8-b840-102036db4b47"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.218563 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4640cb2b-933c-4203-b4f4-43e7d4b674e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4640cb2b-933c-4203-b4f4-43e7d4b674e3" (UID: "4640cb2b-933c-4203-b4f4-43e7d4b674e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.218693 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4640cb2b-933c-4203-b4f4-43e7d4b674e3-kube-api-access-25v9r" (OuterVolumeSpecName: "kube-api-access-25v9r") pod "4640cb2b-933c-4203-b4f4-43e7d4b674e3" (UID: "4640cb2b-933c-4203-b4f4-43e7d4b674e3"). InnerVolumeSpecName "kube-api-access-25v9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.314826 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.314877 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25v9r\" (UniqueName: \"kubernetes.io/projected/4640cb2b-933c-4203-b4f4-43e7d4b674e3-kube-api-access-25v9r\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.314890 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.314903 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.314912 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbtv2\" (UniqueName: \"kubernetes.io/projected/b6b83012-bc43-4de8-b840-102036db4b47-kube-api-access-lbtv2\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.314921 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6b83012-bc43-4de8-b840-102036db4b47-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.314936 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6b83012-bc43-4de8-b840-102036db4b47-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.314951 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4640cb2b-933c-4203-b4f4-43e7d4b674e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.314967 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4640cb2b-933c-4203-b4f4-43e7d4b674e3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.480653 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c"] Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.483482 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df9db78f9-mbl2c"] Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.495097 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb"] Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.498374 4766 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-747d6fd8f4-zpxwb"] Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.625631 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4640cb2b-933c-4203-b4f4-43e7d4b674e3" path="/var/lib/kubelet/pods/4640cb2b-933c-4203-b4f4-43e7d4b674e3/volumes" Dec 13 03:50:27 crc kubenswrapper[4766]: I1213 03:50:27.626194 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b83012-bc43-4de8-b840-102036db4b47" path="/var/lib/kubelet/pods/b6b83012-bc43-4de8-b840-102036db4b47/volumes" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.234207 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-sdb2l"] Dec 13 03:50:28 crc kubenswrapper[4766]: E1213 03:50:28.234706 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b83012-bc43-4de8-b840-102036db4b47" containerName="route-controller-manager" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.234743 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b83012-bc43-4de8-b840-102036db4b47" containerName="route-controller-manager" Dec 13 03:50:28 crc kubenswrapper[4766]: E1213 03:50:28.234768 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4640cb2b-933c-4203-b4f4-43e7d4b674e3" containerName="controller-manager" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.234776 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4640cb2b-933c-4203-b4f4-43e7d4b674e3" containerName="controller-manager" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.234937 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b83012-bc43-4de8-b840-102036db4b47" containerName="route-controller-manager" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.234958 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4640cb2b-933c-4203-b4f4-43e7d4b674e3" containerName="controller-manager" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.235516 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.237620 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.238404 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.238404 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.238600 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.239172 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd"] Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.239918 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.244003 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.244208 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.244231 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.244518 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.244801 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.244870 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.245231 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.248576 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.250979 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.251599 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-sdb2l"] Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.275463 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd"] Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.429491 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-client-ca\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.429583 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-client-ca\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.429692 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-config\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc 
kubenswrapper[4766]: I1213 03:50:28.429720 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.429750 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmmvw\" (UniqueName: \"kubernetes.io/projected/32825ea0-7e19-4846-8c42-7b4fb6b89c03-kube-api-access-vmmvw\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.429777 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-config\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.429801 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32825ea0-7e19-4846-8c42-7b4fb6b89c03-serving-cert\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.429826 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkqcf\" (UniqueName: \"kubernetes.io/projected/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-kube-api-access-rkqcf\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.429844 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-serving-cert\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.530747 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-client-ca\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.530845 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-client-ca\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 
03:50:28.530923 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-config\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.530966 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.531022 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmmvw\" (UniqueName: \"kubernetes.io/projected/32825ea0-7e19-4846-8c42-7b4fb6b89c03-kube-api-access-vmmvw\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.531062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-config\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.531096 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32825ea0-7e19-4846-8c42-7b4fb6b89c03-serving-cert\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.531157 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkqcf\" (UniqueName: \"kubernetes.io/projected/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-kube-api-access-rkqcf\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.531192 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-serving-cert\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.532539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-client-ca\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.532549 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-client-ca\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.532763 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.532927 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-config\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.534399 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-config\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.538385 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32825ea0-7e19-4846-8c42-7b4fb6b89c03-serving-cert\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.538874 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-serving-cert\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.562088 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmmvw\" (UniqueName: \"kubernetes.io/projected/32825ea0-7e19-4846-8c42-7b4fb6b89c03-kube-api-access-vmmvw\") pod \"controller-manager-d6f97d578-sdb2l\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") " pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.562399 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.562653 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkqcf\" (UniqueName: \"kubernetes.io/projected/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-kube-api-access-rkqcf\") pod \"route-controller-manager-7cc7864974-f4wvd\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:28 crc kubenswrapper[4766]: I1213 03:50:28.571388 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.009659 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-sdb2l"] Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.018104 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd"] Dec 13 03:50:29 crc kubenswrapper[4766]: W1213 03:50:29.019885 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32825ea0_7e19_4846_8c42_7b4fb6b89c03.slice/crio-677d51d1170c7962ec4ecddc6043c675b7e34ddefefe59bc7bf1697d65db207b WatchSource:0}: Error finding container 677d51d1170c7962ec4ecddc6043c675b7e34ddefefe59bc7bf1697d65db207b: Status 404 returned error can't find the container with id 677d51d1170c7962ec4ecddc6043c675b7e34ddefefe59bc7bf1697d65db207b Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.156905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" event={"ID":"32825ea0-7e19-4846-8c42-7b4fb6b89c03","Type":"ContainerStarted","Data":"a91bf1f074b1045f007ae344a2041de248557ac99f06b6d45b477584f5b6efab"} Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.157376 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.157393 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" event={"ID":"32825ea0-7e19-4846-8c42-7b4fb6b89c03","Type":"ContainerStarted","Data":"677d51d1170c7962ec4ecddc6043c675b7e34ddefefe59bc7bf1697d65db207b"} Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.159768 4766 patch_prober.go:28] interesting pod/controller-manager-d6f97d578-sdb2l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.159812 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" podUID="32825ea0-7e19-4846-8c42-7b4fb6b89c03" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.162477 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" event={"ID":"12cb9a34-612d-41ed-83d2-6c9cec8d13cf","Type":"ContainerStarted","Data":"eb2956678e5c3f6b6f5a21dd7265854154ea064ea065ec5e3803b46c724f02ba"} Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.162575 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" event={"ID":"12cb9a34-612d-41ed-83d2-6c9cec8d13cf","Type":"ContainerStarted","Data":"feb7d3743026d0608f70b7df50b7333feac10cb13de8a915ac6ce393bfdcbed1"} Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.162836 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.164540 4766 patch_prober.go:28] interesting pod/route-controller-manager-7cc7864974-f4wvd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.164605 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" podUID="12cb9a34-612d-41ed-83d2-6c9cec8d13cf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.176110 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" podStartSLOduration=3.176090448 podStartE2EDuration="3.176090448s" podCreationTimestamp="2025-12-13 03:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:50:29.175580183 +0000 UTC m=+360.685513147" watchObservedRunningTime="2025-12-13 03:50:29.176090448 +0000 UTC m=+360.686023412" Dec 13 03:50:29 crc kubenswrapper[4766]: I1213 03:50:29.197625 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" podStartSLOduration=3.197600507 podStartE2EDuration="3.197600507s" podCreationTimestamp="2025-12-13 03:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:50:29.191650323 +0000 UTC m=+360.701583307" watchObservedRunningTime="2025-12-13 03:50:29.197600507 +0000 UTC m=+360.707533471" Dec 13 03:50:30 crc kubenswrapper[4766]: I1213 03:50:30.198030 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:50:30 crc kubenswrapper[4766]: I1213 03:50:30.198091 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" Dec 13 03:50:39 crc kubenswrapper[4766]: I1213 03:50:39.732073 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 03:50:39 crc kubenswrapper[4766]: I1213 03:50:39.732727 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 03:51:06 crc kubenswrapper[4766]: I1213 03:51:06.818364 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd"] Dec 13 03:51:06 crc kubenswrapper[4766]: I1213 03:51:06.819242 4766 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" podUID="12cb9a34-612d-41ed-83d2-6c9cec8d13cf" containerName="route-controller-manager" containerID="cri-o://eb2956678e5c3f6b6f5a21dd7265854154ea064ea065ec5e3803b46c724f02ba" gracePeriod=30 Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.443964 4766 generic.go:334] "Generic (PLEG): container finished" podID="12cb9a34-612d-41ed-83d2-6c9cec8d13cf" containerID="eb2956678e5c3f6b6f5a21dd7265854154ea064ea065ec5e3803b46c724f02ba" exitCode=0 Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.444088 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" event={"ID":"12cb9a34-612d-41ed-83d2-6c9cec8d13cf","Type":"ContainerDied","Data":"eb2956678e5c3f6b6f5a21dd7265854154ea064ea065ec5e3803b46c724f02ba"} Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.759364 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.897638 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-serving-cert\") pod \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.897931 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-config\") pod \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.897954 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-client-ca\") pod \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.897980 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkqcf\" (UniqueName: \"kubernetes.io/projected/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-kube-api-access-rkqcf\") pod \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\" (UID: \"12cb9a34-612d-41ed-83d2-6c9cec8d13cf\") " Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.898916 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-config" (OuterVolumeSpecName: "config") pod "12cb9a34-612d-41ed-83d2-6c9cec8d13cf" (UID: "12cb9a34-612d-41ed-83d2-6c9cec8d13cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.899091 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-client-ca" (OuterVolumeSpecName: "client-ca") pod "12cb9a34-612d-41ed-83d2-6c9cec8d13cf" (UID: "12cb9a34-612d-41ed-83d2-6c9cec8d13cf"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.903326 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "12cb9a34-612d-41ed-83d2-6c9cec8d13cf" (UID: "12cb9a34-612d-41ed-83d2-6c9cec8d13cf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:51:07 crc kubenswrapper[4766]: I1213 03:51:07.910341 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-kube-api-access-rkqcf" (OuterVolumeSpecName: "kube-api-access-rkqcf") pod "12cb9a34-612d-41ed-83d2-6c9cec8d13cf" (UID: "12cb9a34-612d-41ed-83d2-6c9cec8d13cf"). InnerVolumeSpecName "kube-api-access-rkqcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.001159 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.001203 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.001215 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkqcf\" (UniqueName: \"kubernetes.io/projected/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-kube-api-access-rkqcf\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.001224 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12cb9a34-612d-41ed-83d2-6c9cec8d13cf-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.266543 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s"] Dec 13 03:51:08 crc kubenswrapper[4766]: E1213 03:51:08.266839 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cb9a34-612d-41ed-83d2-6c9cec8d13cf" containerName="route-controller-manager" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.266866 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cb9a34-612d-41ed-83d2-6c9cec8d13cf" containerName="route-controller-manager" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.266978 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="12cb9a34-612d-41ed-83d2-6c9cec8d13cf" containerName="route-controller-manager" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.267366 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.282478 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s"] Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.406611 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8472f899-493c-4552-a72f-0df759333e6f-serving-cert\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.406660 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxnhx\" (UniqueName: \"kubernetes.io/projected/8472f899-493c-4552-a72f-0df759333e6f-kube-api-access-wxnhx\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.406705 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8472f899-493c-4552-a72f-0df759333e6f-config\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.406885 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8472f899-493c-4552-a72f-0df759333e6f-client-ca\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.451033 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" event={"ID":"12cb9a34-612d-41ed-83d2-6c9cec8d13cf","Type":"ContainerDied","Data":"feb7d3743026d0608f70b7df50b7333feac10cb13de8a915ac6ce393bfdcbed1"} Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.451099 4766 scope.go:117] "RemoveContainer" containerID="eb2956678e5c3f6b6f5a21dd7265854154ea064ea065ec5e3803b46c724f02ba" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.451095 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.480852 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd"] Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.485250 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-f4wvd"] Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.508490 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8472f899-493c-4552-a72f-0df759333e6f-serving-cert\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.508540 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxnhx\" (UniqueName: \"kubernetes.io/projected/8472f899-493c-4552-a72f-0df759333e6f-kube-api-access-wxnhx\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.508584 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8472f899-493c-4552-a72f-0df759333e6f-config\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.508629 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8472f899-493c-4552-a72f-0df759333e6f-client-ca\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.509761 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8472f899-493c-4552-a72f-0df759333e6f-client-ca\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.510002 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8472f899-493c-4552-a72f-0df759333e6f-config\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.515399 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8472f899-493c-4552-a72f-0df759333e6f-serving-cert\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 
03:51:08.533515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxnhx\" (UniqueName: \"kubernetes.io/projected/8472f899-493c-4552-a72f-0df759333e6f-kube-api-access-wxnhx\") pod \"route-controller-manager-df9db78f9-w767s\" (UID: \"8472f899-493c-4552-a72f-0df759333e6f\") " pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.586842 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:08 crc kubenswrapper[4766]: I1213 03:51:08.996518 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s"] Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.100990 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fs4sf"] Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.101848 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.113620 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fs4sf"] Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.235640 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6500a3e1-220b-47db-a270-61d983731a03-registry-certificates\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.236155 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6500a3e1-220b-47db-a270-61d983731a03-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.236184 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6500a3e1-220b-47db-a270-61d983731a03-bound-sa-token\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.236208 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6500a3e1-220b-47db-a270-61d983731a03-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.236242 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6rlq\" (UniqueName: \"kubernetes.io/projected/6500a3e1-220b-47db-a270-61d983731a03-kube-api-access-d6rlq\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.236407 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6500a3e1-220b-47db-a270-61d983731a03-registry-tls\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.236588 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.236736 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6500a3e1-220b-47db-a270-61d983731a03-trusted-ca\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.297524 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.338214 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6500a3e1-220b-47db-a270-61d983731a03-trusted-ca\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.338300 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6500a3e1-220b-47db-a270-61d983731a03-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.338320 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6500a3e1-220b-47db-a270-61d983731a03-registry-certificates\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.338340 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6500a3e1-220b-47db-a270-61d983731a03-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.338357 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6500a3e1-220b-47db-a270-61d983731a03-bound-sa-token\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.338379 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6rlq\" (UniqueName: \"kubernetes.io/projected/6500a3e1-220b-47db-a270-61d983731a03-kube-api-access-d6rlq\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.338399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6500a3e1-220b-47db-a270-61d983731a03-registry-tls\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.340484 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6500a3e1-220b-47db-a270-61d983731a03-registry-certificates\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.341942 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6500a3e1-220b-47db-a270-61d983731a03-trusted-ca\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.342758 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6500a3e1-220b-47db-a270-61d983731a03-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.344198 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6500a3e1-220b-47db-a270-61d983731a03-registry-tls\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.347012 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6500a3e1-220b-47db-a270-61d983731a03-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.360528 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6500a3e1-220b-47db-a270-61d983731a03-bound-sa-token\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.360899 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6rlq\" (UniqueName: \"kubernetes.io/projected/6500a3e1-220b-47db-a270-61d983731a03-kube-api-access-d6rlq\") pod \"image-registry-66df7c8f76-fs4sf\" (UID: \"6500a3e1-220b-47db-a270-61d983731a03\") " pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.418339 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.458501 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" event={"ID":"8472f899-493c-4552-a72f-0df759333e6f","Type":"ContainerStarted","Data":"5cfbbe63a8952d9e5adbf0ef68d44900583aee4f3660acc9f85ca94d141b3848"} Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.458555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" event={"ID":"8472f899-493c-4552-a72f-0df759333e6f","Type":"ContainerStarted","Data":"14bdd4199e30e84f279a7abe8ff0897a5487ce4c26d927db58360f861eb7b9f8"} Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.458776 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.476379 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" podStartSLOduration=3.476357937 podStartE2EDuration="3.476357937s" podCreationTimestamp="2025-12-13 03:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:51:09.473504834 +0000 UTC m=+400.983437818" watchObservedRunningTime="2025-12-13 03:51:09.476357937 +0000 UTC m=+400.986290911" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.625534 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12cb9a34-612d-41ed-83d2-6c9cec8d13cf" path="/var/lib/kubelet/pods/12cb9a34-612d-41ed-83d2-6c9cec8d13cf/volumes" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.733506 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.733594 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.762895 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-df9db78f9-w767s" Dec 13 03:51:09 crc kubenswrapper[4766]: I1213 03:51:09.868894 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-fs4sf"] Dec 13 03:51:09 crc kubenswrapper[4766]: W1213 03:51:09.874803 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6500a3e1_220b_47db_a270_61d983731a03.slice/crio-0fa07cf887e01a93878fb0201c11747a4d458905724a13230870268b82e5bee5 WatchSource:0}: Error finding container 0fa07cf887e01a93878fb0201c11747a4d458905724a13230870268b82e5bee5: Status 404 returned error can't find the container with id 0fa07cf887e01a93878fb0201c11747a4d458905724a13230870268b82e5bee5 Dec 13 03:51:10 crc kubenswrapper[4766]: I1213 03:51:10.478999 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" event={"ID":"6500a3e1-220b-47db-a270-61d983731a03","Type":"ContainerStarted","Data":"57d8e56f77d9e3444bd54fef5745bf892999c4b4daf8783262126c2b5cabfc4e"} Dec 13 03:51:10 crc kubenswrapper[4766]: I1213 03:51:10.479843 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" event={"ID":"6500a3e1-220b-47db-a270-61d983731a03","Type":"ContainerStarted","Data":"0fa07cf887e01a93878fb0201c11747a4d458905724a13230870268b82e5bee5"} Dec 13 03:51:10 crc kubenswrapper[4766]: I1213 03:51:10.479878 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" Dec 13 03:51:10 crc kubenswrapper[4766]: I1213 03:51:10.501946 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf" podStartSLOduration=1.5019265370000001 podStartE2EDuration="1.501926537s" podCreationTimestamp="2025-12-13 03:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:51:10.497674334 +0000 UTC m=+402.007607308" watchObservedRunningTime="2025-12-13 03:51:10.501926537 +0000 UTC m=+402.011859501" Dec 13 03:51:18 crc kubenswrapper[4766]: I1213 03:51:18.696152 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d95nn"] Dec 13 03:51:18 crc kubenswrapper[4766]: I1213 03:51:18.697037 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d95nn" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" containerName="registry-server" containerID="cri-o://c67142ece289f91db175f19e66a1fad7c98085cb680093abbf792615d72a9259" gracePeriod=2 Dec 13 03:51:18 crc kubenswrapper[4766]: I1213 03:51:18.897719 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kq9t5"] Dec 13 03:51:18 crc kubenswrapper[4766]: I1213 03:51:18.898299 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kq9t5" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerName="registry-server" containerID="cri-o://54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c" gracePeriod=2 Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.342644 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kq9t5" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.488532 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-catalog-content\") pod \"116c9e00-53c9-4d49-813e-2f0dd2b24411\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.488619 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hks6n\" (UniqueName: \"kubernetes.io/projected/116c9e00-53c9-4d49-813e-2f0dd2b24411-kube-api-access-hks6n\") pod \"116c9e00-53c9-4d49-813e-2f0dd2b24411\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.488750 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-utilities\") pod \"116c9e00-53c9-4d49-813e-2f0dd2b24411\" (UID: \"116c9e00-53c9-4d49-813e-2f0dd2b24411\") " Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.489998 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-utilities" (OuterVolumeSpecName: "utilities") pod "116c9e00-53c9-4d49-813e-2f0dd2b24411" (UID: "116c9e00-53c9-4d49-813e-2f0dd2b24411"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.497691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/116c9e00-53c9-4d49-813e-2f0dd2b24411-kube-api-access-hks6n" (OuterVolumeSpecName: "kube-api-access-hks6n") pod "116c9e00-53c9-4d49-813e-2f0dd2b24411" (UID: "116c9e00-53c9-4d49-813e-2f0dd2b24411"). InnerVolumeSpecName "kube-api-access-hks6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.551882 4766 generic.go:334] "Generic (PLEG): container finished" podID="9c50096c-0297-4e96-868c-d34cfc326d46" containerID="c67142ece289f91db175f19e66a1fad7c98085cb680093abbf792615d72a9259" exitCode=0 Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.552266 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d95nn" event={"ID":"9c50096c-0297-4e96-868c-d34cfc326d46","Type":"ContainerDied","Data":"c67142ece289f91db175f19e66a1fad7c98085cb680093abbf792615d72a9259"} Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.554511 4766 generic.go:334] "Generic (PLEG): container finished" podID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerID="54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c" exitCode=0 Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.554569 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq9t5" event={"ID":"116c9e00-53c9-4d49-813e-2f0dd2b24411","Type":"ContainerDied","Data":"54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c"} Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.554606 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq9t5" event={"ID":"116c9e00-53c9-4d49-813e-2f0dd2b24411","Type":"ContainerDied","Data":"a5f248f6495e69cafe2f44f6d11a6470db5d35198102e3478dd4050dca12ff45"} Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.554629 4766 scope.go:117] "RemoveContainer" containerID="54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.554809 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kq9t5" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.570162 4766 scope.go:117] "RemoveContainer" containerID="4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.571765 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d95nn" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.587359 4766 scope.go:117] "RemoveContainer" containerID="55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.591653 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hks6n\" (UniqueName: \"kubernetes.io/projected/116c9e00-53c9-4d49-813e-2f0dd2b24411-kube-api-access-hks6n\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.591758 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.611683 4766 scope.go:117] "RemoveContainer" containerID="54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c" Dec 13 03:51:19 crc kubenswrapper[4766]: E1213 03:51:19.612614 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c\": container with ID starting with 54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c not found: ID does not exist" containerID="54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.612692 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c"} err="failed to get container status \"54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c\": rpc error: code = NotFound desc = could not find container \"54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c\": container with ID starting with 54b972ade6b3364a8e811a02f31ecc63c93cc4aeeacad3bae53a776af1218d5c not found: ID does not exist" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.612734 4766 scope.go:117] "RemoveContainer" containerID="4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18" Dec 13 03:51:19 crc kubenswrapper[4766]: E1213 03:51:19.613161 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18\": container with ID starting with 4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18 not found: ID does not exist" containerID="4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.613201 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18"} err="failed to get container status \"4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18\": rpc error: code = NotFound desc = could not find container \"4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18\": container with ID starting with 4b406ba1b4814d64750fc3482cb2671f9e78737570826d31e59918c69c0cfe18 not found: ID does not exist" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.613234 4766 scope.go:117] "RemoveContainer" containerID="55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400" Dec 13 03:51:19 crc kubenswrapper[4766]: E1213 03:51:19.613597 4766 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400\": container with ID starting with 55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400 not found: ID does not exist" containerID="55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.613742 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400"} err="failed to get container status \"55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400\": rpc error: code = NotFound desc = could not find container \"55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400\": container with ID starting with 55ee5b63cb5141f373ff835a985330cd173c4e267ac23dc682bf2f468dbf2400 not found: ID does not exist" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.624895 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "116c9e00-53c9-4d49-813e-2f0dd2b24411" (UID: "116c9e00-53c9-4d49-813e-2f0dd2b24411"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.693176 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-catalog-content\") pod \"9c50096c-0297-4e96-868c-d34cfc326d46\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.693293 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfnnp\" (UniqueName: \"kubernetes.io/projected/9c50096c-0297-4e96-868c-d34cfc326d46-kube-api-access-mfnnp\") pod \"9c50096c-0297-4e96-868c-d34cfc326d46\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.693422 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-utilities\") pod \"9c50096c-0297-4e96-868c-d34cfc326d46\" (UID: \"9c50096c-0297-4e96-868c-d34cfc326d46\") " Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.693800 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116c9e00-53c9-4d49-813e-2f0dd2b24411-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.694621 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-utilities" (OuterVolumeSpecName: "utilities") pod "9c50096c-0297-4e96-868c-d34cfc326d46" (UID: "9c50096c-0297-4e96-868c-d34cfc326d46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.697201 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c50096c-0297-4e96-868c-d34cfc326d46-kube-api-access-mfnnp" (OuterVolumeSpecName: "kube-api-access-mfnnp") pod "9c50096c-0297-4e96-868c-d34cfc326d46" (UID: "9c50096c-0297-4e96-868c-d34cfc326d46"). 
InnerVolumeSpecName "kube-api-access-mfnnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.714101 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c50096c-0297-4e96-868c-d34cfc326d46" (UID: "9c50096c-0297-4e96-868c-d34cfc326d46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.795493 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfnnp\" (UniqueName: \"kubernetes.io/projected/9c50096c-0297-4e96-868c-d34cfc326d46-kube-api-access-mfnnp\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.795546 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.795560 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c50096c-0297-4e96-868c-d34cfc326d46-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.891448 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kq9t5"] Dec 13 03:51:19 crc kubenswrapper[4766]: I1213 03:51:19.901403 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kq9t5"] Dec 13 03:51:20 crc kubenswrapper[4766]: I1213 03:51:20.568547 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d95nn" event={"ID":"9c50096c-0297-4e96-868c-d34cfc326d46","Type":"ContainerDied","Data":"946890c205ce0dc4db884c704885e7114ff78d3d6461b58271cdb13a41e5e910"} Dec 13 03:51:20 crc kubenswrapper[4766]: I1213 03:51:20.568617 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d95nn" Dec 13 03:51:20 crc kubenswrapper[4766]: I1213 03:51:20.568903 4766 scope.go:117] "RemoveContainer" containerID="c67142ece289f91db175f19e66a1fad7c98085cb680093abbf792615d72a9259" Dec 13 03:51:20 crc kubenswrapper[4766]: I1213 03:51:20.587825 4766 scope.go:117] "RemoveContainer" containerID="039ffeb25ca63f1d93918d2e40332955ab662c49c12cc3d99736d6f4f95a0166" Dec 13 03:51:20 crc kubenswrapper[4766]: I1213 03:51:20.606400 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d95nn"] Dec 13 03:51:20 crc kubenswrapper[4766]: I1213 03:51:20.609164 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d95nn"] Dec 13 03:51:20 crc kubenswrapper[4766]: I1213 03:51:20.621457 4766 scope.go:117] "RemoveContainer" containerID="136dee99e7d42b2677f001149c83748d753d105eeb950f5c21432d7a801626a4" Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.301093 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzmqb"] Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.301448 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hzmqb" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" containerName="registry-server" containerID="cri-o://70c403e7019271062fd03e40a6c7abf6a1a8ad4d852977e528162bf3334f2c15" gracePeriod=2 Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.496702 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb4lm"] Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.497055 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wb4lm" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerName="registry-server" containerID="cri-o://22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176" gracePeriod=2 Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.577561 4766 generic.go:334] "Generic (PLEG): container finished" podID="35afc352-db00-48b9-b888-6b8b9bc36403" containerID="70c403e7019271062fd03e40a6c7abf6a1a8ad4d852977e528162bf3334f2c15" exitCode=0 Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.577637 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzmqb" event={"ID":"35afc352-db00-48b9-b888-6b8b9bc36403","Type":"ContainerDied","Data":"70c403e7019271062fd03e40a6c7abf6a1a8ad4d852977e528162bf3334f2c15"} Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.626494 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" path="/var/lib/kubelet/pods/116c9e00-53c9-4d49-813e-2f0dd2b24411/volumes" Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.627305 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" path="/var/lib/kubelet/pods/9c50096c-0297-4e96-868c-d34cfc326d46/volumes" Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.726225 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzmqb" Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.823400 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-utilities\") pod \"35afc352-db00-48b9-b888-6b8b9bc36403\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.823541 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-catalog-content\") pod \"35afc352-db00-48b9-b888-6b8b9bc36403\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.823612 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kstrr\" (UniqueName: \"kubernetes.io/projected/35afc352-db00-48b9-b888-6b8b9bc36403-kube-api-access-kstrr\") pod \"35afc352-db00-48b9-b888-6b8b9bc36403\" (UID: \"35afc352-db00-48b9-b888-6b8b9bc36403\") " Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.824402 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-utilities" (OuterVolumeSpecName: "utilities") pod "35afc352-db00-48b9-b888-6b8b9bc36403" (UID: "35afc352-db00-48b9-b888-6b8b9bc36403"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.837637 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35afc352-db00-48b9-b888-6b8b9bc36403-kube-api-access-kstrr" (OuterVolumeSpecName: "kube-api-access-kstrr") pod "35afc352-db00-48b9-b888-6b8b9bc36403" (UID: "35afc352-db00-48b9-b888-6b8b9bc36403"). InnerVolumeSpecName "kube-api-access-kstrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.878102 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.905408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35afc352-db00-48b9-b888-6b8b9bc36403" (UID: "35afc352-db00-48b9-b888-6b8b9bc36403"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.925178 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.925213 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35afc352-db00-48b9-b888-6b8b9bc36403-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:21 crc kubenswrapper[4766]: I1213 03:51:21.925226 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kstrr\" (UniqueName: \"kubernetes.io/projected/35afc352-db00-48b9-b888-6b8b9bc36403-kube-api-access-kstrr\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.026540 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-catalog-content\") pod \"16b3386a-f52e-47b8-a0cb-172dd34f4761\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.026611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgvff\" (UniqueName: \"kubernetes.io/projected/16b3386a-f52e-47b8-a0cb-172dd34f4761-kube-api-access-qgvff\") pod \"16b3386a-f52e-47b8-a0cb-172dd34f4761\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.026687 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-utilities\") pod \"16b3386a-f52e-47b8-a0cb-172dd34f4761\" (UID: \"16b3386a-f52e-47b8-a0cb-172dd34f4761\") " Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.027398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-utilities" (OuterVolumeSpecName: "utilities") pod "16b3386a-f52e-47b8-a0cb-172dd34f4761" (UID: "16b3386a-f52e-47b8-a0cb-172dd34f4761"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.029360 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b3386a-f52e-47b8-a0cb-172dd34f4761-kube-api-access-qgvff" (OuterVolumeSpecName: "kube-api-access-qgvff") pod "16b3386a-f52e-47b8-a0cb-172dd34f4761" (UID: "16b3386a-f52e-47b8-a0cb-172dd34f4761"). InnerVolumeSpecName "kube-api-access-qgvff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.075223 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16b3386a-f52e-47b8-a0cb-172dd34f4761" (UID: "16b3386a-f52e-47b8-a0cb-172dd34f4761"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.128097 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.128146 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b3386a-f52e-47b8-a0cb-172dd34f4761-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.128160 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgvff\" (UniqueName: \"kubernetes.io/projected/16b3386a-f52e-47b8-a0cb-172dd34f4761-kube-api-access-qgvff\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.586126 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzmqb" event={"ID":"35afc352-db00-48b9-b888-6b8b9bc36403","Type":"ContainerDied","Data":"e5f8bbe62947f8c0b02043627cf632cd5bc7bb2dec7d8b22b2790c533d5cc52f"} Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.586192 4766 scope.go:117] "RemoveContainer" containerID="70c403e7019271062fd03e40a6c7abf6a1a8ad4d852977e528162bf3334f2c15" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.586311 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzmqb" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.592032 4766 generic.go:334] "Generic (PLEG): container finished" podID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerID="22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176" exitCode=0 Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.592080 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wb4lm" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.592094 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb4lm" event={"ID":"16b3386a-f52e-47b8-a0cb-172dd34f4761","Type":"ContainerDied","Data":"22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176"} Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.592134 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb4lm" event={"ID":"16b3386a-f52e-47b8-a0cb-172dd34f4761","Type":"ContainerDied","Data":"ebbd82bfe21937fc73d1a7edf3560229b34249a02b5e882ed09ee369c511692a"} Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.614475 4766 scope.go:117] "RemoveContainer" containerID="855c45cc8a8626aa9fb7c0b5f22ea9216cc1a7414ddd4f008048095d2170d012" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.619012 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzmqb"] Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.624136 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hzmqb"] Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.633584 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb4lm"] Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.638187 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wb4lm"] Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.642539 4766 scope.go:117] "RemoveContainer" containerID="c7362cd8cc7317a068cca58fe1f3c7f129620d35b745b2af3a91b4ed2c1c83e0" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.660581 4766 scope.go:117] "RemoveContainer" containerID="22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.678494 4766 scope.go:117] "RemoveContainer" containerID="649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.696810 4766 scope.go:117] "RemoveContainer" containerID="96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.711846 4766 scope.go:117] "RemoveContainer" containerID="22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176" Dec 13 03:51:22 crc kubenswrapper[4766]: E1213 03:51:22.712313 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176\": container with ID starting with 22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176 not found: ID does not exist" containerID="22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.712356 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176"} err="failed to get container status \"22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176\": rpc error: code = NotFound desc = could not find container \"22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176\": container with ID starting with 22a2ff3a7e2f71b6336f0fd3bfe531ed8c59a8ebbed034aa9c7d2c1eebdf1176 not found: ID does not exist" Dec 13 
03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.712390 4766 scope.go:117] "RemoveContainer" containerID="649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6" Dec 13 03:51:22 crc kubenswrapper[4766]: E1213 03:51:22.712933 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6\": container with ID starting with 649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6 not found: ID does not exist" containerID="649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.712966 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6"} err="failed to get container status \"649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6\": rpc error: code = NotFound desc = could not find container \"649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6\": container with ID starting with 649bc64f84bfdf65e43029acae26f1eb8288a1cc79b3bf83fbd7a95fe83282e6 not found: ID does not exist" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.712989 4766 scope.go:117] "RemoveContainer" containerID="96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87" Dec 13 03:51:22 crc kubenswrapper[4766]: E1213 03:51:22.713320 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87\": container with ID starting with 96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87 not found: ID does not exist" containerID="96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87" Dec 13 03:51:22 crc kubenswrapper[4766]: I1213 03:51:22.713367 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87"} err="failed to get container status \"96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87\": rpc error: code = NotFound desc = could not find container \"96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87\": container with ID starting with 96165b00721064373f74c948d38a08a14ac8f872e9e7a657e2681316a693cb87 not found: ID does not exist" Dec 13 03:51:23 crc kubenswrapper[4766]: I1213 03:51:23.625467 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" path="/var/lib/kubelet/pods/16b3386a-f52e-47b8-a0cb-172dd34f4761/volumes" Dec 13 03:51:23 crc kubenswrapper[4766]: I1213 03:51:23.627976 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" path="/var/lib/kubelet/pods/35afc352-db00-48b9-b888-6b8b9bc36403/volumes" Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.310006 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-sdb2l"] Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.310740 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" podUID="32825ea0-7e19-4846-8c42-7b4fb6b89c03" containerName="controller-manager" containerID="cri-o://a91bf1f074b1045f007ae344a2041de248557ac99f06b6d45b477584f5b6efab" gracePeriod=30 Dec 13 
03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.627150 4766 generic.go:334] "Generic (PLEG): container finished" podID="32825ea0-7e19-4846-8c42-7b4fb6b89c03" containerID="a91bf1f074b1045f007ae344a2041de248557ac99f06b6d45b477584f5b6efab" exitCode=0
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.627199 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" event={"ID":"32825ea0-7e19-4846-8c42-7b4fb6b89c03","Type":"ContainerDied","Data":"a91bf1f074b1045f007ae344a2041de248557ac99f06b6d45b477584f5b6efab"}
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.749472 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l"
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.895139 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-proxy-ca-bundles\") pod \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") "
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.895267 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmmvw\" (UniqueName: \"kubernetes.io/projected/32825ea0-7e19-4846-8c42-7b4fb6b89c03-kube-api-access-vmmvw\") pod \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") "
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.895298 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-config\") pod \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") "
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.895318 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-client-ca\") pod \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") "
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.895336 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32825ea0-7e19-4846-8c42-7b4fb6b89c03-serving-cert\") pod \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\" (UID: \"32825ea0-7e19-4846-8c42-7b4fb6b89c03\") "
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.896289 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "32825ea0-7e19-4846-8c42-7b4fb6b89c03" (UID: "32825ea0-7e19-4846-8c42-7b4fb6b89c03"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.896319 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-client-ca" (OuterVolumeSpecName: "client-ca") pod "32825ea0-7e19-4846-8c42-7b4fb6b89c03" (UID: "32825ea0-7e19-4846-8c42-7b4fb6b89c03"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.896413 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-config" (OuterVolumeSpecName: "config") pod "32825ea0-7e19-4846-8c42-7b4fb6b89c03" (UID: "32825ea0-7e19-4846-8c42-7b4fb6b89c03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.901155 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32825ea0-7e19-4846-8c42-7b4fb6b89c03-kube-api-access-vmmvw" (OuterVolumeSpecName: "kube-api-access-vmmvw") pod "32825ea0-7e19-4846-8c42-7b4fb6b89c03" (UID: "32825ea0-7e19-4846-8c42-7b4fb6b89c03"). InnerVolumeSpecName "kube-api-access-vmmvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.901422 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32825ea0-7e19-4846-8c42-7b4fb6b89c03-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "32825ea0-7e19-4846-8c42-7b4fb6b89c03" (UID: "32825ea0-7e19-4846-8c42-7b4fb6b89c03"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.997304 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.997638 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmmvw\" (UniqueName: \"kubernetes.io/projected/32825ea0-7e19-4846-8c42-7b4fb6b89c03-kube-api-access-vmmvw\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.997655 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-config\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.997666 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32825ea0-7e19-4846-8c42-7b4fb6b89c03-client-ca\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:26 crc kubenswrapper[4766]: I1213 03:51:26.997674 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32825ea0-7e19-4846-8c42-7b4fb6b89c03-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:27 crc kubenswrapper[4766]: I1213 03:51:27.637771 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l" event={"ID":"32825ea0-7e19-4846-8c42-7b4fb6b89c03","Type":"ContainerDied","Data":"677d51d1170c7962ec4ecddc6043c675b7e34ddefefe59bc7bf1697d65db207b"}
Dec 13 03:51:27 crc kubenswrapper[4766]: I1213 03:51:27.637868 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-sdb2l"
Dec 13 03:51:27 crc kubenswrapper[4766]: I1213 03:51:27.637879 4766 scope.go:117] "RemoveContainer" containerID="a91bf1f074b1045f007ae344a2041de248557ac99f06b6d45b477584f5b6efab"
Dec 13 03:51:27 crc kubenswrapper[4766]: I1213 03:51:27.671479 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-sdb2l"]
Dec 13 03:51:27 crc kubenswrapper[4766]: I1213 03:51:27.681618 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-sdb2l"]
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307146 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"]
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.307784 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerName="extract-content"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307802 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerName="extract-content"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.307816 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerName="extract-content"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307826 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerName="extract-content"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.307844 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307853 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.307865 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" containerName="extract-content"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307873 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" containerName="extract-content"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.307882 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" containerName="extract-content"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307890 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" containerName="extract-content"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.307903 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307910 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.307922 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerName="extract-utilities"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307931 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerName="extract-utilities"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.307941 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32825ea0-7e19-4846-8c42-7b4fb6b89c03" containerName="controller-manager"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307949 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="32825ea0-7e19-4846-8c42-7b4fb6b89c03" containerName="controller-manager"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.307965 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" containerName="extract-utilities"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307974 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" containerName="extract-utilities"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.307987 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerName="extract-utilities"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.307995 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerName="extract-utilities"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.308007 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.308015 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.308028 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.308037 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: E1213 03:51:28.308050 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" containerName="extract-utilities"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.308058 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" containerName="extract-utilities"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.308183 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c50096c-0297-4e96-868c-d34cfc326d46" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.308202 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b3386a-f52e-47b8-a0cb-172dd34f4761" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.308221 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="35afc352-db00-48b9-b888-6b8b9bc36403" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.308233 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="32825ea0-7e19-4846-8c42-7b4fb6b89c03" containerName="controller-manager"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.308243 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="116c9e00-53c9-4d49-813e-2f0dd2b24411" containerName="registry-server"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.308756 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.311762 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.314382 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.314394 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.314407 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.314595 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.314931 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.323042 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.330463 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"]
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.426734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpf5r\" (UniqueName: \"kubernetes.io/projected/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-kube-api-access-lpf5r\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.426788 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-serving-cert\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.426825 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-config\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.426865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-client-ca\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.426908 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-proxy-ca-bundles\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.528716 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpf5r\" (UniqueName: \"kubernetes.io/projected/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-kube-api-access-lpf5r\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.528770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-serving-cert\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.528806 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-config\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.528842 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-client-ca\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.528887 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-proxy-ca-bundles\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.530531 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-proxy-ca-bundles\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.532904 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-config\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.533608 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-client-ca\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.538598 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-serving-cert\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.547582 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpf5r\" (UniqueName: \"kubernetes.io/projected/1ab62b62-7f5c-4f48-b239-a86cd4f98ff1-kube-api-access-lpf5r\") pod \"controller-manager-747d6fd8f4-bnk4j\" (UID: \"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1\") " pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:28 crc kubenswrapper[4766]: I1213 03:51:28.624988 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
Dec 13 03:51:29 crc kubenswrapper[4766]: I1213 03:51:29.145208 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"]
Dec 13 03:51:29 crc kubenswrapper[4766]: W1213 03:51:29.150096 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ab62b62_7f5c_4f48_b239_a86cd4f98ff1.slice/crio-c1308519c74650f26a405b0aac6726e4fb619071abe5a2a2eb24dcee649cb442 WatchSource:0}: Error finding container c1308519c74650f26a405b0aac6726e4fb619071abe5a2a2eb24dcee649cb442: Status 404 returned error can't find the container with id c1308519c74650f26a405b0aac6726e4fb619071abe5a2a2eb24dcee649cb442
Dec 13 03:51:29 crc kubenswrapper[4766]: I1213 03:51:29.425236 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fs4sf"
Dec 13 03:51:29 crc kubenswrapper[4766]: I1213 03:51:29.518457 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5fm8f"]
Dec 13 03:51:29 crc kubenswrapper[4766]: I1213 03:51:29.629093 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32825ea0-7e19-4846-8c42-7b4fb6b89c03" path="/var/lib/kubelet/pods/32825ea0-7e19-4846-8c42-7b4fb6b89c03/volumes"
Dec 13 03:51:29 crc kubenswrapper[4766]: I1213 03:51:29.820783 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j" event={"ID":"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1","Type":"ContainerStarted","Data":"6fd4abbb73fe0f7f58dc2698e081d6fa55ea6e0d344195e615398930245b7bed"}
Dec 13 03:51:29 crc kubenswrapper[4766]: I1213 03:51:29.820857 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j" event={"ID":"1ab62b62-7f5c-4f48-b239-a86cd4f98ff1","Type":"ContainerStarted","Data":"c1308519c74650f26a405b0aac6726e4fb619071abe5a2a2eb24dcee649cb442"}
Dec 13 03:51:29 crc kubenswrapper[4766]: I1213 03:51:29.821513 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j"
pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j" Dec 13 03:51:29 crc kubenswrapper[4766]: I1213 03:51:29.849046 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-747d6fd8f4-bnk4j" podStartSLOduration=3.849024157 podStartE2EDuration="3.849024157s" podCreationTimestamp="2025-12-13 03:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:51:29.847826552 +0000 UTC m=+421.357759516" watchObservedRunningTime="2025-12-13 03:51:29.849024157 +0000 UTC m=+421.358957131" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.154647 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p7x6l"] Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.156001 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p7x6l" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" containerName="registry-server" containerID="cri-o://07d72fd5d681cfbc9f206f30a484559cf20fbe192fcc3f268f3a47c355e28590" gracePeriod=30 Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.168932 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dqhc"] Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.169274 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6dqhc" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" containerName="registry-server" containerID="cri-o://332d60252bbe8ac4131f2c43d4dc0e44c5f341ac1dfd02ec4e9de9da98877e1d" gracePeriod=30 Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.174491 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d47ng"] Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.174882 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" podUID="3c6809b6-c67a-45cf-b251-f20b62790313" containerName="marketplace-operator" containerID="cri-o://0e8f9a87a9111faaf38fccb060fc08a3d716f62346f7aa3c5c393fa6ecc35ae0" gracePeriod=30 Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.181214 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsl6d"] Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.181540 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fsl6d" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerName="registry-server" containerID="cri-o://d1d0f79669b994734c82a0410f431e7a31dd43f0692e633ff3283ff6fe979d2d" gracePeriod=30 Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.189701 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8d4vh"] Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.189970 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8d4vh" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerName="registry-server" containerID="cri-o://f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7" gracePeriod=30 Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.204783 4766 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-8fdvl"] Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.206284 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.220332 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8fdvl"] Dec 13 03:51:34 crc kubenswrapper[4766]: E1213 03:51:34.308238 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7 is running failed: container process not found" containerID="f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 03:51:34 crc kubenswrapper[4766]: E1213 03:51:34.308868 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7 is running failed: container process not found" containerID="f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 03:51:34 crc kubenswrapper[4766]: E1213 03:51:34.309187 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7 is running failed: container process not found" containerID="f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 03:51:34 crc kubenswrapper[4766]: E1213 03:51:34.309378 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-8d4vh" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerName="registry-server" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.329216 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b992d64d-b068-4d78-aac9-7e0ff5eda198-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8fdvl\" (UID: \"b992d64d-b068-4d78-aac9-7e0ff5eda198\") " pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.329300 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnx2f\" (UniqueName: \"kubernetes.io/projected/b992d64d-b068-4d78-aac9-7e0ff5eda198-kube-api-access-rnx2f\") pod \"marketplace-operator-79b997595-8fdvl\" (UID: \"b992d64d-b068-4d78-aac9-7e0ff5eda198\") " pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.329497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b992d64d-b068-4d78-aac9-7e0ff5eda198-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8fdvl\" (UID: 
\"b992d64d-b068-4d78-aac9-7e0ff5eda198\") " pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.431044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b992d64d-b068-4d78-aac9-7e0ff5eda198-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8fdvl\" (UID: \"b992d64d-b068-4d78-aac9-7e0ff5eda198\") " pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.431126 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnx2f\" (UniqueName: \"kubernetes.io/projected/b992d64d-b068-4d78-aac9-7e0ff5eda198-kube-api-access-rnx2f\") pod \"marketplace-operator-79b997595-8fdvl\" (UID: \"b992d64d-b068-4d78-aac9-7e0ff5eda198\") " pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.431171 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b992d64d-b068-4d78-aac9-7e0ff5eda198-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8fdvl\" (UID: \"b992d64d-b068-4d78-aac9-7e0ff5eda198\") " pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.432699 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b992d64d-b068-4d78-aac9-7e0ff5eda198-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8fdvl\" (UID: \"b992d64d-b068-4d78-aac9-7e0ff5eda198\") " pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.441371 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b992d64d-b068-4d78-aac9-7e0ff5eda198-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8fdvl\" (UID: \"b992d64d-b068-4d78-aac9-7e0ff5eda198\") " pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.449780 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnx2f\" (UniqueName: \"kubernetes.io/projected/b992d64d-b068-4d78-aac9-7e0ff5eda198-kube-api-access-rnx2f\") pod \"marketplace-operator-79b997595-8fdvl\" (UID: \"b992d64d-b068-4d78-aac9-7e0ff5eda198\") " pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.539865 4766 util.go:30] "No sandbox for pod can be found. 
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.539865 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl"
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.865895 4766 generic.go:334] "Generic (PLEG): container finished" podID="3c6809b6-c67a-45cf-b251-f20b62790313" containerID="0e8f9a87a9111faaf38fccb060fc08a3d716f62346f7aa3c5c393fa6ecc35ae0" exitCode=0
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.866264 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" event={"ID":"3c6809b6-c67a-45cf-b251-f20b62790313","Type":"ContainerDied","Data":"0e8f9a87a9111faaf38fccb060fc08a3d716f62346f7aa3c5c393fa6ecc35ae0"}
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.875049 4766 generic.go:334] "Generic (PLEG): container finished" podID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerID="f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7" exitCode=0
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.875149 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4vh" event={"ID":"b65de837-4baa-4aff-98cb-2babbdfdb2f5","Type":"ContainerDied","Data":"f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7"}
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.878372 4766 generic.go:334] "Generic (PLEG): container finished" podID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" containerID="07d72fd5d681cfbc9f206f30a484559cf20fbe192fcc3f268f3a47c355e28590" exitCode=0
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.878465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7x6l" event={"ID":"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd","Type":"ContainerDied","Data":"07d72fd5d681cfbc9f206f30a484559cf20fbe192fcc3f268f3a47c355e28590"}
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.882214 4766 generic.go:334] "Generic (PLEG): container finished" podID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerID="d1d0f79669b994734c82a0410f431e7a31dd43f0692e633ff3283ff6fe979d2d" exitCode=0
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.882294 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsl6d" event={"ID":"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34","Type":"ContainerDied","Data":"d1d0f79669b994734c82a0410f431e7a31dd43f0692e633ff3283ff6fe979d2d"}
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.885756 4766 generic.go:334] "Generic (PLEG): container finished" podID="f162311d-72df-42b4-b586-7bc1d4945c99" containerID="332d60252bbe8ac4131f2c43d4dc0e44c5f341ac1dfd02ec4e9de9da98877e1d" exitCode=0
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.885791 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dqhc" event={"ID":"f162311d-72df-42b4-b586-7bc1d4945c99","Type":"ContainerDied","Data":"332d60252bbe8ac4131f2c43d4dc0e44c5f341ac1dfd02ec4e9de9da98877e1d"}
Dec 13 03:51:34 crc kubenswrapper[4766]: I1213 03:51:34.987101 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8fdvl"]
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.334261 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.460105 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-utilities\") pod \"f162311d-72df-42b4-b586-7bc1d4945c99\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.460216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-catalog-content\") pod \"f162311d-72df-42b4-b586-7bc1d4945c99\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.460276 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68z9n\" (UniqueName: \"kubernetes.io/projected/f162311d-72df-42b4-b586-7bc1d4945c99-kube-api-access-68z9n\") pod \"f162311d-72df-42b4-b586-7bc1d4945c99\" (UID: \"f162311d-72df-42b4-b586-7bc1d4945c99\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.461731 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-utilities" (OuterVolumeSpecName: "utilities") pod "f162311d-72df-42b4-b586-7bc1d4945c99" (UID: "f162311d-72df-42b4-b586-7bc1d4945c99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.468678 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f162311d-72df-42b4-b586-7bc1d4945c99-kube-api-access-68z9n" (OuterVolumeSpecName: "kube-api-access-68z9n") pod "f162311d-72df-42b4-b586-7bc1d4945c99" (UID: "f162311d-72df-42b4-b586-7bc1d4945c99"). InnerVolumeSpecName "kube-api-access-68z9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.520381 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.527346 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.553261 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsl6d"
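All five marketplace containers report "container finished ... exitCode=0" within roughly 0.7s of the kill requests issued at 03:51:34.15-34.19, so each honored SIGTERM well inside its 30s gracePeriod; the unmount storm that follows is the normal next step. A sketch that quantifies this per container (hypothetical helper; uses the klog timestamp, which carries microseconds, and ignores midnight rollover):

```python
#!/usr/bin/env python3
"""Compare 'Killing container with a grace period' to the matching
'container finished' event and report how much grace was actually used."""
import re
import sys
from datetime import datetime

KLOG_TS = re.compile(r'[IWE]\d{4} (\d\d:\d\d:\d\d\.\d{6})')
KILL = re.compile(r'containerID="cri-o://([0-9a-f]{64})" gracePeriod=(\d+)')
DONE = re.compile(r'"Generic \(PLEG\): container finished".*'
                  r'containerID="([0-9a-f]{64})" exitCode=(-?\d+)')

def ts(line):
    m = KLOG_TS.search(line)
    return datetime.strptime(m.group(1), "%H:%M:%S.%f") if m else None

kills = {}  # containerID -> (kill time, grace period in seconds)
for line in sys.stdin:
    t = ts(line)
    if t is None:
        continue
    m = KILL.search(line)
    if m:
        kills[m.group(1)] = (t, int(m.group(2)))
        continue
    m = DONE.search(line)
    if m and m.group(1) in kills:
        t0, grace = kills[m.group(1)]
        used = (t - t0).total_seconds()
        print(f"{m.group(1)[:12]}: exit={m.group(2)} "
              f"after {used:.3f}s of {grace}s grace")
```

For example, marketplace-operator (0e8f9a87...) went from kill request (03:51:34.174882) to finished (03:51:34.865895) in about 0.69s.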
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.557124 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.562123 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.562153 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68z9n\" (UniqueName: \"kubernetes.io/projected/f162311d-72df-42b4-b586-7bc1d4945c99-kube-api-access-68z9n\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.566107 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f162311d-72df-42b4-b586-7bc1d4945c99" (UID: "f162311d-72df-42b4-b586-7bc1d4945c99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.662755 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zksbl\" (UniqueName: \"kubernetes.io/projected/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-kube-api-access-zksbl\") pod \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.662946 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-utilities\") pod \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.663082 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-utilities\") pod \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.663204 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-catalog-content\") pod \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.663307 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-utilities\") pod \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.663398 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-trusted-ca\") pod \"3c6809b6-c67a-45cf-b251-f20b62790313\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.663509 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2c6z\" (UniqueName: \"kubernetes.io/projected/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-kube-api-access-n2c6z\") pod \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\" (UID: \"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.663587 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8q56\" (UniqueName: \"kubernetes.io/projected/b65de837-4baa-4aff-98cb-2babbdfdb2f5-kube-api-access-f8q56\") pod \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.663705 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-catalog-content\") pod \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\" (UID: \"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.663871 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-operator-metrics\") pod \"3c6809b6-c67a-45cf-b251-f20b62790313\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.663951 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z88h4\" (UniqueName: \"kubernetes.io/projected/3c6809b6-c67a-45cf-b251-f20b62790313-kube-api-access-z88h4\") pod \"3c6809b6-c67a-45cf-b251-f20b62790313\" (UID: \"3c6809b6-c67a-45cf-b251-f20b62790313\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.664051 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-catalog-content\") pod \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\" (UID: \"b65de837-4baa-4aff-98cb-2babbdfdb2f5\") "
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.663740 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-utilities" (OuterVolumeSpecName: "utilities") pod "b65de837-4baa-4aff-98cb-2babbdfdb2f5" (UID: "b65de837-4baa-4aff-98cb-2babbdfdb2f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.664097 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-utilities" (OuterVolumeSpecName: "utilities") pod "cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" (UID: "cf9aebbf-5f0a-4684-b6fc-a85c909dbb34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.664169 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3c6809b6-c67a-45cf-b251-f20b62790313" (UID: "3c6809b6-c67a-45cf-b251-f20b62790313"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.664531 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.664622 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f162311d-72df-42b4-b586-7bc1d4945c99-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.664690 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.664754 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.666064 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-utilities" (OuterVolumeSpecName: "utilities") pod "456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" (UID: "456b5fc9-3dab-40fc-81c5-ab9ea1f110dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.680374 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-kube-api-access-zksbl" (OuterVolumeSpecName: "kube-api-access-zksbl") pod "456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" (UID: "456b5fc9-3dab-40fc-81c5-ab9ea1f110dd"). InnerVolumeSpecName "kube-api-access-zksbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.680398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65de837-4baa-4aff-98cb-2babbdfdb2f5-kube-api-access-f8q56" (OuterVolumeSpecName: "kube-api-access-f8q56") pod "b65de837-4baa-4aff-98cb-2babbdfdb2f5" (UID: "b65de837-4baa-4aff-98cb-2babbdfdb2f5"). InnerVolumeSpecName "kube-api-access-f8q56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.680841 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6809b6-c67a-45cf-b251-f20b62790313-kube-api-access-z88h4" (OuterVolumeSpecName: "kube-api-access-z88h4") pod "3c6809b6-c67a-45cf-b251-f20b62790313" (UID: "3c6809b6-c67a-45cf-b251-f20b62790313"). InnerVolumeSpecName "kube-api-access-z88h4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.683418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3c6809b6-c67a-45cf-b251-f20b62790313" (UID: "3c6809b6-c67a-45cf-b251-f20b62790313"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.692550 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-kube-api-access-n2c6z" (OuterVolumeSpecName: "kube-api-access-n2c6z") pod "cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" (UID: "cf9aebbf-5f0a-4684-b6fc-a85c909dbb34"). InnerVolumeSpecName "kube-api-access-n2c6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.702050 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" (UID: "cf9aebbf-5f0a-4684-b6fc-a85c909dbb34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.757344 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" (UID: "456b5fc9-3dab-40fc-81c5-ab9ea1f110dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.767286 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zksbl\" (UniqueName: \"kubernetes.io/projected/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-kube-api-access-zksbl\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.767349 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.767368 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.767380 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2c6z\" (UniqueName: \"kubernetes.io/projected/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34-kube-api-access-n2c6z\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.767392 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8q56\" (UniqueName: \"kubernetes.io/projected/b65de837-4baa-4aff-98cb-2babbdfdb2f5-kube-api-access-f8q56\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.767401 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.767412 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3c6809b6-c67a-45cf-b251-f20b62790313-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.767442 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z88h4\" (UniqueName: \"kubernetes.io/projected/3c6809b6-c67a-45cf-b251-f20b62790313-kube-api-access-z88h4\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.835774 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b65de837-4baa-4aff-98cb-2babbdfdb2f5" (UID: "b65de837-4baa-4aff-98cb-2babbdfdb2f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.868905 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65de837-4baa-4aff-98cb-2babbdfdb2f5-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.893716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsl6d" event={"ID":"cf9aebbf-5f0a-4684-b6fc-a85c909dbb34","Type":"ContainerDied","Data":"ba78dde592a2cfc27f8d60762a94e1f7bebe0babdaa260adb0e9cfa9b0d20afe"}
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.893741 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsl6d"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.893801 4766 scope.go:117] "RemoveContainer" containerID="d1d0f79669b994734c82a0410f431e7a31dd43f0692e633ff3283ff6fe979d2d"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.901655 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dqhc" event={"ID":"f162311d-72df-42b4-b586-7bc1d4945c99","Type":"ContainerDied","Data":"1b13a5d9feb0b99e379077bfdce898afb44558fe3ef292b0f9f3b2fdb20feaa3"}
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.901793 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dqhc"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.905382 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" event={"ID":"b992d64d-b068-4d78-aac9-7e0ff5eda198","Type":"ContainerStarted","Data":"a88941fa59e4b921522d71fd9ac75724e06289ed4f79a830cee48baf389d2a5f"}
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.905455 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" event={"ID":"b992d64d-b068-4d78-aac9-7e0ff5eda198","Type":"ContainerStarted","Data":"a9ba281cc400dfbfbdcd5d0ea7e2506730dd716769d939762483bd5501746a88"}
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.907583 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.909315 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d47ng" event={"ID":"3c6809b6-c67a-45cf-b251-f20b62790313","Type":"ContainerDied","Data":"bc16cadfd51a66dd20a680355103a5fca50dae53b80186e7ce82b56f10892baf"}
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.911231 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.914023 4766 scope.go:117] "RemoveContainer" containerID="c7f194d8329170c8ebc3429e8fa51099ae65e2ac87c52a3a3a8fbdda69b5ed8a"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.914667 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8d4vh"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.914752 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4vh" event={"ID":"b65de837-4baa-4aff-98cb-2babbdfdb2f5","Type":"ContainerDied","Data":"c3a29ff0b6baeb73419c2d155856060e25e14270f0218520d75986cdfb288024"}
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.918977 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7x6l" event={"ID":"456b5fc9-3dab-40fc-81c5-ab9ea1f110dd","Type":"ContainerDied","Data":"c197731db52926c0a92ac5cb94047ffc4f13c61fdc3cb4e13ce8043598dcd839"}
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.919067 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7x6l"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.931884 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8fdvl" podStartSLOduration=1.93186258 podStartE2EDuration="1.93186258s" podCreationTimestamp="2025-12-13 03:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:51:35.923764384 +0000 UTC m=+427.433697348" watchObservedRunningTime="2025-12-13 03:51:35.93186258 +0000 UTC m=+427.441795544"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.941824 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dqhc"]
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.945991 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6dqhc"]
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.951619 4766 scope.go:117] "RemoveContainer" containerID="785ce32cb67032b6e16a79f53c11a4502b38cf6321d3c525a0a6c3670da42d0b"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.958364 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsl6d"]
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.963466 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsl6d"]
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.981616 4766 scope.go:117] "RemoveContainer" containerID="332d60252bbe8ac4131f2c43d4dc0e44c5f341ac1dfd02ec4e9de9da98877e1d"
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.994904 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d47ng"]
Dec 13 03:51:35 crc kubenswrapper[4766]: I1213 03:51:35.998511 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d47ng"]
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.003542 4766 scope.go:117] "RemoveContainer" containerID="fcee0a230f0613ad67d95c1d7ca81d2c2c5e09b5a9bc42c7619e84afd08a5295"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.020662 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8d4vh"]
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.024031 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8d4vh"]
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.031312 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p7x6l"]
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.031483 4766 scope.go:117] "RemoveContainer" containerID="c08c7649abe1a4d54b232e131e0e67a724cb5092b87aebc996737e1605b1c219"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.038171 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p7x6l"]
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.049958 4766 scope.go:117] "RemoveContainer" containerID="0e8f9a87a9111faaf38fccb060fc08a3d716f62346f7aa3c5c393fa6ecc35ae0"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.067378 4766 scope.go:117] "RemoveContainer" containerID="f7d798871931121442b564bd42ce9229e9d8324b70a0cb38c7efe95be729ebe7"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.084899 4766 scope.go:117] "RemoveContainer" containerID="29b7f5aa398c96c8afac60f36c7d2e6981fa9d2993913774b3898e829cf0dbcd"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.101301 4766 scope.go:117] "RemoveContainer" containerID="b0f6b29b253a7d0ba62ff72ddafae66c92c41b1aad72eb4cf955033a6d9a90ed"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.115344 4766 scope.go:117] "RemoveContainer" containerID="07d72fd5d681cfbc9f206f30a484559cf20fbe192fcc3f268f3a47c355e28590"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.131776 4766 scope.go:117] "RemoveContainer" containerID="12aef1a3a7fb5c27671ca060ddc3adb44ed400eeed55c5ecb3034609cdf90d76"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.148286 4766 scope.go:117] "RemoveContainer" containerID="7f463a63294b1047028824a05a7b16c5a6c86e5d6e214890de78854956c964cc"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770519 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4fp7m"]
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770763 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770785 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770804 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" containerName="extract-content"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770813 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" containerName="extract-content"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770824 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" containerName="extract-utilities"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770833 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" containerName="extract-utilities"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770845 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770853 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770869 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" containerName="extract-content"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770877 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" containerName="extract-content"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770887 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerName="extract-content"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770894 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerName="extract-content"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770907 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerName="extract-content"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770915 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerName="extract-content"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770926 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerName="extract-utilities"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770933 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerName="extract-utilities"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770943 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770950 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770958 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerName="extract-utilities"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770966 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerName="extract-utilities"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770977 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.770984 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.770995 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" containerName="extract-utilities"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.771002 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" containerName="extract-utilities"
Dec 13 03:51:36 crc kubenswrapper[4766]: E1213 03:51:36.771012 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6809b6-c67a-45cf-b251-f20b62790313" containerName="marketplace-operator"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.771021 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6809b6-c67a-45cf-b251-f20b62790313" containerName="marketplace-operator"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.771138 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6809b6-c67a-45cf-b251-f20b62790313" containerName="marketplace-operator"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.771158 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.771170 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.771178 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.771187 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" containerName="registry-server"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.772026 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fp7m"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.774379 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.784529 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fp7m"]
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.881479 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pxc6\" (UniqueName: \"kubernetes.io/projected/388cd2e4-3bb2-4972-be1a-cc0dcc346746-kube-api-access-2pxc6\") pod \"redhat-operators-4fp7m\" (UID: \"388cd2e4-3bb2-4972-be1a-cc0dcc346746\") " pod="openshift-marketplace/redhat-operators-4fp7m"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.881726 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/388cd2e4-3bb2-4972-be1a-cc0dcc346746-utilities\") pod \"redhat-operators-4fp7m\" (UID: \"388cd2e4-3bb2-4972-be1a-cc0dcc346746\") " pod="openshift-marketplace/redhat-operators-4fp7m"
Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.881833 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/388cd2e4-3bb2-4972-be1a-cc0dcc346746-catalog-content\") pod \"redhat-operators-4fp7m\" (UID: \"388cd2e4-3bb2-4972-be1a-cc0dcc346746\") " pod="openshift-marketplace/redhat-operators-4fp7m"
\"kubernetes.io/empty-dir/388cd2e4-3bb2-4972-be1a-cc0dcc346746-catalog-content\") pod \"redhat-operators-4fp7m\" (UID: \"388cd2e4-3bb2-4972-be1a-cc0dcc346746\") " pod="openshift-marketplace/redhat-operators-4fp7m" Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.984415 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pxc6\" (UniqueName: \"kubernetes.io/projected/388cd2e4-3bb2-4972-be1a-cc0dcc346746-kube-api-access-2pxc6\") pod \"redhat-operators-4fp7m\" (UID: \"388cd2e4-3bb2-4972-be1a-cc0dcc346746\") " pod="openshift-marketplace/redhat-operators-4fp7m" Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.984464 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/388cd2e4-3bb2-4972-be1a-cc0dcc346746-utilities\") pod \"redhat-operators-4fp7m\" (UID: \"388cd2e4-3bb2-4972-be1a-cc0dcc346746\") " pod="openshift-marketplace/redhat-operators-4fp7m" Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.984996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/388cd2e4-3bb2-4972-be1a-cc0dcc346746-utilities\") pod \"redhat-operators-4fp7m\" (UID: \"388cd2e4-3bb2-4972-be1a-cc0dcc346746\") " pod="openshift-marketplace/redhat-operators-4fp7m" Dec 13 03:51:36 crc kubenswrapper[4766]: I1213 03:51:36.985109 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/388cd2e4-3bb2-4972-be1a-cc0dcc346746-catalog-content\") pod \"redhat-operators-4fp7m\" (UID: \"388cd2e4-3bb2-4972-be1a-cc0dcc346746\") " pod="openshift-marketplace/redhat-operators-4fp7m" Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.012882 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pxc6\" (UniqueName: \"kubernetes.io/projected/388cd2e4-3bb2-4972-be1a-cc0dcc346746-kube-api-access-2pxc6\") pod \"redhat-operators-4fp7m\" (UID: \"388cd2e4-3bb2-4972-be1a-cc0dcc346746\") " pod="openshift-marketplace/redhat-operators-4fp7m" Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.114935 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4fp7m" Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.507232 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fp7m"] Dec 13 03:51:37 crc kubenswrapper[4766]: W1213 03:51:37.519662 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod388cd2e4_3bb2_4972_be1a_cc0dcc346746.slice/crio-c7ecf47fac4306f2e8eadc1ef2cc785d706f1f41dea2cdf3f30e78504bee24cb WatchSource:0}: Error finding container c7ecf47fac4306f2e8eadc1ef2cc785d706f1f41dea2cdf3f30e78504bee24cb: Status 404 returned error can't find the container with id c7ecf47fac4306f2e8eadc1ef2cc785d706f1f41dea2cdf3f30e78504bee24cb Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.625120 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6809b6-c67a-45cf-b251-f20b62790313" path="/var/lib/kubelet/pods/3c6809b6-c67a-45cf-b251-f20b62790313/volumes" Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.626248 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456b5fc9-3dab-40fc-81c5-ab9ea1f110dd" path="/var/lib/kubelet/pods/456b5fc9-3dab-40fc-81c5-ab9ea1f110dd/volumes" Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.626959 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b65de837-4baa-4aff-98cb-2babbdfdb2f5" path="/var/lib/kubelet/pods/b65de837-4baa-4aff-98cb-2babbdfdb2f5/volumes" Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.628277 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf9aebbf-5f0a-4684-b6fc-a85c909dbb34" path="/var/lib/kubelet/pods/cf9aebbf-5f0a-4684-b6fc-a85c909dbb34/volumes" Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.628973 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f162311d-72df-42b4-b586-7bc1d4945c99" path="/var/lib/kubelet/pods/f162311d-72df-42b4-b586-7bc1d4945c99/volumes" Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.938846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fp7m" event={"ID":"388cd2e4-3bb2-4972-be1a-cc0dcc346746","Type":"ContainerStarted","Data":"c7ecf47fac4306f2e8eadc1ef2cc785d706f1f41dea2cdf3f30e78504bee24cb"} Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.971104 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s2k56"] Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.972300 4766 util.go:30] "No sandbox for pod can be found. 
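The five kubelet_volumes.go:163 entries above close the loop on the pods deleted at 03:51:35-03:51:36: each UID that went through "SyncLoop REMOVE" later has its /var/lib/kubelet/pods/<uid>/volumes directory cleaned. A sketch for cross-checking the two against a saved kubelet.log; the REMOVE entries carry pod names while the cleanup entries carry UIDs, so the PLEG events are used to map between them (file name and regexes are illustrative):

import re

# Map pod name -> UID from PLEG event entries, then check that every
# pod removed via "SyncLoop REMOVE" also got its volumes dir cleaned.
pleg = re.compile(r'pod="([^"]+)" event=\{"ID":"([^"]+)"')
removed = re.compile(r'"SyncLoop REMOVE" source="api" pods=\["([^"]+)"\]')
cleaned = re.compile(r'Cleaned up orphaned pod volumes dir" podUID="([^"]+)"')

uid_of, removed_pods, cleaned_uids = {}, set(), set()
with open("kubelet.log") as f:
    for line in f:
        if m := pleg.search(line):
            uid_of[m.group(1)] = m.group(2)
        if m := removed.search(line):
            removed_pods.add(m.group(1))
        if m := cleaned.search(line):
            cleaned_uids.add(m.group(1))

for pod in sorted(removed_pods):
    uid = uid_of.get(pod)
    status = "cleaned" if uid in cleaned_uids else "pending"
    print(f"{pod} uid={uid} volumes={status}")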
Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.974696 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 13 03:51:37 crc kubenswrapper[4766]: I1213 03:51:37.984081 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2k56"]
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.097663 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6c441f-c934-4352-8997-84aa50668ac0-utilities\") pod \"certified-operators-s2k56\" (UID: \"7d6c441f-c934-4352-8997-84aa50668ac0\") " pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.098035 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6c441f-c934-4352-8997-84aa50668ac0-catalog-content\") pod \"certified-operators-s2k56\" (UID: \"7d6c441f-c934-4352-8997-84aa50668ac0\") " pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.098151 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfmbc\" (UniqueName: \"kubernetes.io/projected/7d6c441f-c934-4352-8997-84aa50668ac0-kube-api-access-bfmbc\") pod \"certified-operators-s2k56\" (UID: \"7d6c441f-c934-4352-8997-84aa50668ac0\") " pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.199273 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6c441f-c934-4352-8997-84aa50668ac0-utilities\") pod \"certified-operators-s2k56\" (UID: \"7d6c441f-c934-4352-8997-84aa50668ac0\") " pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.199361 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6c441f-c934-4352-8997-84aa50668ac0-catalog-content\") pod \"certified-operators-s2k56\" (UID: \"7d6c441f-c934-4352-8997-84aa50668ac0\") " pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.199445 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfmbc\" (UniqueName: \"kubernetes.io/projected/7d6c441f-c934-4352-8997-84aa50668ac0-kube-api-access-bfmbc\") pod \"certified-operators-s2k56\" (UID: \"7d6c441f-c934-4352-8997-84aa50668ac0\") " pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.199899 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6c441f-c934-4352-8997-84aa50668ac0-utilities\") pod \"certified-operators-s2k56\" (UID: \"7d6c441f-c934-4352-8997-84aa50668ac0\") " pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.200454 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6c441f-c934-4352-8997-84aa50668ac0-catalog-content\") pod \"certified-operators-s2k56\" (UID: \"7d6c441f-c934-4352-8997-84aa50668ac0\") " pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.219721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfmbc\" (UniqueName: \"kubernetes.io/projected/7d6c441f-c934-4352-8997-84aa50668ac0-kube-api-access-bfmbc\") pod \"certified-operators-s2k56\" (UID: \"7d6c441f-c934-4352-8997-84aa50668ac0\") " pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.295516 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.730961 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2k56"]
Dec 13 03:51:38 crc kubenswrapper[4766]: I1213 03:51:38.945829 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2k56" event={"ID":"7d6c441f-c934-4352-8997-84aa50668ac0","Type":"ContainerStarted","Data":"c1e30cd5f3e5d87fc1e72da1e79414c416395cdb1a2b8e8086c53dd3d20be8dd"}
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.370800 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tsghz"]
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.372000 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.374841 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.382634 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsghz"]
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.425203 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077b3190-346b-4de2-ae4a-b10ef4c0f635-catalog-content\") pod \"community-operators-tsghz\" (UID: \"077b3190-346b-4de2-ae4a-b10ef4c0f635\") " pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.425272 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp6s9\" (UniqueName: \"kubernetes.io/projected/077b3190-346b-4de2-ae4a-b10ef4c0f635-kube-api-access-kp6s9\") pod \"community-operators-tsghz\" (UID: \"077b3190-346b-4de2-ae4a-b10ef4c0f635\") " pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.425414 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077b3190-346b-4de2-ae4a-b10ef4c0f635-utilities\") pod \"community-operators-tsghz\" (UID: \"077b3190-346b-4de2-ae4a-b10ef4c0f635\") " pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.526887 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp6s9\" (UniqueName: \"kubernetes.io/projected/077b3190-346b-4de2-ae4a-b10ef4c0f635-kube-api-access-kp6s9\") pod \"community-operators-tsghz\" (UID: \"077b3190-346b-4de2-ae4a-b10ef4c0f635\") " pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.527230 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077b3190-346b-4de2-ae4a-b10ef4c0f635-utilities\") pod \"community-operators-tsghz\" (UID: \"077b3190-346b-4de2-ae4a-b10ef4c0f635\") " pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.527330 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077b3190-346b-4de2-ae4a-b10ef4c0f635-catalog-content\") pod \"community-operators-tsghz\" (UID: \"077b3190-346b-4de2-ae4a-b10ef4c0f635\") " pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.527816 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077b3190-346b-4de2-ae4a-b10ef4c0f635-utilities\") pod \"community-operators-tsghz\" (UID: \"077b3190-346b-4de2-ae4a-b10ef4c0f635\") " pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.527849 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077b3190-346b-4de2-ae4a-b10ef4c0f635-catalog-content\") pod \"community-operators-tsghz\" (UID: \"077b3190-346b-4de2-ae4a-b10ef4c0f635\") " pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.546977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp6s9\" (UniqueName: \"kubernetes.io/projected/077b3190-346b-4de2-ae4a-b10ef4c0f635-kube-api-access-kp6s9\") pod \"community-operators-tsghz\" (UID: \"077b3190-346b-4de2-ae4a-b10ef4c0f635\") " pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.705174 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.732212 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.732292 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.732345 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94w9l"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.733032 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca44fbdb60b7c4e21f7f23576ab2a08072b8d79ddd151b8f9523f542ad2e0779"} pod="openshift-machine-config-operator/machine-config-daemon-94w9l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.733101 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" containerID="cri-o://ca44fbdb60b7c4e21f7f23576ab2a08072b8d79ddd151b8f9523f542ad2e0779" gracePeriod=600
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.959233 4766 generic.go:334] "Generic (PLEG): container finished" podID="7d6c441f-c934-4352-8997-84aa50668ac0" containerID="c668d2fef3033ad96d9907adf96f79f9ac086621cb5376e95c34c4a043da1f6a" exitCode=0
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.959291 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2k56" event={"ID":"7d6c441f-c934-4352-8997-84aa50668ac0","Type":"ContainerDied","Data":"c668d2fef3033ad96d9907adf96f79f9ac086621cb5376e95c34c4a043da1f6a"}
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.963634 4766 generic.go:334] "Generic (PLEG): container finished" podID="388cd2e4-3bb2-4972-be1a-cc0dcc346746" containerID="5fff4fd40a8a26a51bb67af24333ac245157960ff56a9693ad4ba82009e05e90" exitCode=0
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.963830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fp7m" event={"ID":"388cd2e4-3bb2-4972-be1a-cc0dcc346746","Type":"ContainerDied","Data":"5fff4fd40a8a26a51bb67af24333ac245157960ff56a9693ad4ba82009e05e90"}
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.970923 4766 generic.go:334] "Generic (PLEG): container finished" podID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerID="ca44fbdb60b7c4e21f7f23576ab2a08072b8d79ddd151b8f9523f542ad2e0779" exitCode=0
Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.970972 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerDied","Data":"ca44fbdb60b7c4e21f7f23576ab2a08072b8d79ddd151b8f9523f542ad2e0779"}
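The machine-config-daemon sequence above is the standard liveness-restart path: the HTTP GET to 127.0.0.1:8798/health is refused, the probe flips to unhealthy, and the kubelet kills container ca44fbdb... with the pod's 600s grace period before starting a replacement (the ContainerStarted for the new container 3581901a... follows at 03:51:41). A rough re-creation of the probe check itself; the URL comes from the log, while the timeout and failure threshold are assumed defaults, not values read from the pod spec:

import urllib.request
import urllib.error

# Re-run the same HTTP liveness check the kubelet performs against
# machine-config-daemon. URL is taken from the log; FAILURE_THRESHOLD
# and the timeout are illustrative assumptions.
URL = "http://127.0.0.1:8798/health"
FAILURE_THRESHOLD = 3  # assumed; common kubelet default

def probe_once(timeout=1.0):
    try:
        with urllib.request.urlopen(URL, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # "connect: connection refused" in the log surfaces here
        return False

failures = 0
for _ in range(FAILURE_THRESHOLD):
    if probe_once():
        failures = 0
        break
    failures += 1

if failures >= FAILURE_THRESHOLD:
    print("liveness unhealthy -> kubelet would kill and restart the container")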
event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerDied","Data":"ca44fbdb60b7c4e21f7f23576ab2a08072b8d79ddd151b8f9523f542ad2e0779"} Dec 13 03:51:39 crc kubenswrapper[4766]: I1213 03:51:39.971104 4766 scope.go:117] "RemoveContainer" containerID="e9773315398fed8d96bd9123b496f579e53c9c3a78f9c7e5706580e7ac40b420" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.139755 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsghz"] Dec 13 03:51:40 crc kubenswrapper[4766]: W1213 03:51:40.150573 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077b3190_346b_4de2_ae4a_b10ef4c0f635.slice/crio-d7cef93d87aa381c2953af84f81db9c8eaf492ad921b613dfb9220b87c56189d WatchSource:0}: Error finding container d7cef93d87aa381c2953af84f81db9c8eaf492ad921b613dfb9220b87c56189d: Status 404 returned error can't find the container with id d7cef93d87aa381c2953af84f81db9c8eaf492ad921b613dfb9220b87c56189d Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.439403 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v844g"] Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.440604 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.443174 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.452391 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v844g"] Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.543216 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgh48\" (UniqueName: \"kubernetes.io/projected/38153d6a-6577-469c-aa93-6eb38dd85064-kube-api-access-fgh48\") pod \"redhat-marketplace-v844g\" (UID: \"38153d6a-6577-469c-aa93-6eb38dd85064\") " pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.543701 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38153d6a-6577-469c-aa93-6eb38dd85064-catalog-content\") pod \"redhat-marketplace-v844g\" (UID: \"38153d6a-6577-469c-aa93-6eb38dd85064\") " pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.543820 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38153d6a-6577-469c-aa93-6eb38dd85064-utilities\") pod \"redhat-marketplace-v844g\" (UID: \"38153d6a-6577-469c-aa93-6eb38dd85064\") " pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.648180 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38153d6a-6577-469c-aa93-6eb38dd85064-catalog-content\") pod \"redhat-marketplace-v844g\" (UID: \"38153d6a-6577-469c-aa93-6eb38dd85064\") " pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.648296 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38153d6a-6577-469c-aa93-6eb38dd85064-utilities\") pod \"redhat-marketplace-v844g\" (UID: \"38153d6a-6577-469c-aa93-6eb38dd85064\") " pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.648359 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgh48\" (UniqueName: \"kubernetes.io/projected/38153d6a-6577-469c-aa93-6eb38dd85064-kube-api-access-fgh48\") pod \"redhat-marketplace-v844g\" (UID: \"38153d6a-6577-469c-aa93-6eb38dd85064\") " pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.649275 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38153d6a-6577-469c-aa93-6eb38dd85064-utilities\") pod \"redhat-marketplace-v844g\" (UID: \"38153d6a-6577-469c-aa93-6eb38dd85064\") " pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.649287 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38153d6a-6577-469c-aa93-6eb38dd85064-catalog-content\") pod \"redhat-marketplace-v844g\" (UID: \"38153d6a-6577-469c-aa93-6eb38dd85064\") " pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.678011 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgh48\" (UniqueName: \"kubernetes.io/projected/38153d6a-6577-469c-aa93-6eb38dd85064-kube-api-access-fgh48\") pod \"redhat-marketplace-v844g\" (UID: \"38153d6a-6577-469c-aa93-6eb38dd85064\") " pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.794271 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:40 crc kubenswrapper[4766]: I1213 03:51:40.997817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2k56" event={"ID":"7d6c441f-c934-4352-8997-84aa50668ac0","Type":"ContainerStarted","Data":"e11e637b558aa20db43f8decba1c716b4643335a8504211dd194ec5047e35694"} Dec 13 03:51:41 crc kubenswrapper[4766]: I1213 03:51:41.005054 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"3581901a9e5d232fe2bee9f853467e0cc1f606b0f931756d9ef9ab621fab6fda"} Dec 13 03:51:41 crc kubenswrapper[4766]: I1213 03:51:41.010236 4766 generic.go:334] "Generic (PLEG): container finished" podID="077b3190-346b-4de2-ae4a-b10ef4c0f635" containerID="b6b4acc3ff4cb802a67c0dae476c6282b3d21df5d1f92d957c8e50097d5037d2" exitCode=0 Dec 13 03:51:41 crc kubenswrapper[4766]: I1213 03:51:41.010286 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsghz" event={"ID":"077b3190-346b-4de2-ae4a-b10ef4c0f635","Type":"ContainerDied","Data":"b6b4acc3ff4cb802a67c0dae476c6282b3d21df5d1f92d957c8e50097d5037d2"} Dec 13 03:51:41 crc kubenswrapper[4766]: I1213 03:51:41.010317 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsghz" event={"ID":"077b3190-346b-4de2-ae4a-b10ef4c0f635","Type":"ContainerStarted","Data":"d7cef93d87aa381c2953af84f81db9c8eaf492ad921b613dfb9220b87c56189d"} Dec 13 03:51:41 crc kubenswrapper[4766]: I1213 03:51:41.269179 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v844g"] Dec 13 03:51:42 crc kubenswrapper[4766]: I1213 03:51:42.016322 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v844g" event={"ID":"38153d6a-6577-469c-aa93-6eb38dd85064","Type":"ContainerStarted","Data":"be95dd6f0b09506dc17199338be67249ab2d8edbf9ba02c070c1d1f14c26e059"} Dec 13 03:51:42 crc kubenswrapper[4766]: I1213 03:51:42.018604 4766 generic.go:334] "Generic (PLEG): container finished" podID="7d6c441f-c934-4352-8997-84aa50668ac0" containerID="e11e637b558aa20db43f8decba1c716b4643335a8504211dd194ec5047e35694" exitCode=0 Dec 13 03:51:42 crc kubenswrapper[4766]: I1213 03:51:42.018659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2k56" event={"ID":"7d6c441f-c934-4352-8997-84aa50668ac0","Type":"ContainerDied","Data":"e11e637b558aa20db43f8decba1c716b4643335a8504211dd194ec5047e35694"} Dec 13 03:51:42 crc kubenswrapper[4766]: I1213 03:51:42.023288 4766 generic.go:334] "Generic (PLEG): container finished" podID="388cd2e4-3bb2-4972-be1a-cc0dcc346746" containerID="39b84aa8f7e72528d547935c24c44cd0feefc34f9c6a4ae4ae27e59e702365a9" exitCode=0 Dec 13 03:51:42 crc kubenswrapper[4766]: I1213 03:51:42.024570 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fp7m" event={"ID":"388cd2e4-3bb2-4972-be1a-cc0dcc346746","Type":"ContainerDied","Data":"39b84aa8f7e72528d547935c24c44cd0feefc34f9c6a4ae4ae27e59e702365a9"} Dec 13 03:51:43 crc kubenswrapper[4766]: I1213 03:51:43.031163 4766 generic.go:334] "Generic (PLEG): container finished" podID="38153d6a-6577-469c-aa93-6eb38dd85064" containerID="35a2bfa90377cefb4de192242e7f284b9cd88ee37dcfb5746e167f9cbcaefda0" exitCode=0 Dec 13 
Dec 13 03:51:43 crc kubenswrapper[4766]: I1213 03:51:43.031230 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v844g" event={"ID":"38153d6a-6577-469c-aa93-6eb38dd85064","Type":"ContainerDied","Data":"35a2bfa90377cefb4de192242e7f284b9cd88ee37dcfb5746e167f9cbcaefda0"}
Dec 13 03:51:45 crc kubenswrapper[4766]: I1213 03:51:45.049055 4766 generic.go:334] "Generic (PLEG): container finished" podID="38153d6a-6577-469c-aa93-6eb38dd85064" containerID="5de1f04fd3bbfe1dbdbcfabe9d50e9fd8d94995e8c1ef777559a17f7e5ab7975" exitCode=0
Dec 13 03:51:45 crc kubenswrapper[4766]: I1213 03:51:45.049210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v844g" event={"ID":"38153d6a-6577-469c-aa93-6eb38dd85064","Type":"ContainerDied","Data":"5de1f04fd3bbfe1dbdbcfabe9d50e9fd8d94995e8c1ef777559a17f7e5ab7975"}
Dec 13 03:51:45 crc kubenswrapper[4766]: I1213 03:51:45.051930 4766 generic.go:334] "Generic (PLEG): container finished" podID="077b3190-346b-4de2-ae4a-b10ef4c0f635" containerID="68e5c969eb33dca119be73fc733e8928e4c03e8e10d1de0fe6c53f1f8ee55324" exitCode=0
Dec 13 03:51:45 crc kubenswrapper[4766]: I1213 03:51:45.052048 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsghz" event={"ID":"077b3190-346b-4de2-ae4a-b10ef4c0f635","Type":"ContainerDied","Data":"68e5c969eb33dca119be73fc733e8928e4c03e8e10d1de0fe6c53f1f8ee55324"}
Dec 13 03:51:45 crc kubenswrapper[4766]: I1213 03:51:45.056706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2k56" event={"ID":"7d6c441f-c934-4352-8997-84aa50668ac0","Type":"ContainerStarted","Data":"dab1b9ada21fbba4eca21795c979a164211c586329e1ce9c519ce9ad8c43757b"}
Dec 13 03:51:45 crc kubenswrapper[4766]: I1213 03:51:45.059361 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fp7m" event={"ID":"388cd2e4-3bb2-4972-be1a-cc0dcc346746","Type":"ContainerStarted","Data":"4de5220c367d8ad2e2dc90fe637c9f02de956bf8fa605803f784a2ea00e405fb"}
Dec 13 03:51:45 crc kubenswrapper[4766]: I1213 03:51:45.112233 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s2k56" podStartSLOduration=3.724310077 podStartE2EDuration="8.11221539s" podCreationTimestamp="2025-12-13 03:51:37 +0000 UTC" firstStartedPulling="2025-12-13 03:51:39.961371608 +0000 UTC m=+431.471304572" lastFinishedPulling="2025-12-13 03:51:44.349276921 +0000 UTC m=+435.859209885" observedRunningTime="2025-12-13 03:51:45.110212222 +0000 UTC m=+436.620145186" watchObservedRunningTime="2025-12-13 03:51:45.11221539 +0000 UTC m=+436.622148354"
Dec 13 03:51:45 crc kubenswrapper[4766]: I1213 03:51:45.135108 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4fp7m" podStartSLOduration=4.97942417 podStartE2EDuration="9.135069614s" podCreationTimestamp="2025-12-13 03:51:36 +0000 UTC" firstStartedPulling="2025-12-13 03:51:39.965514248 +0000 UTC m=+431.475447212" lastFinishedPulling="2025-12-13 03:51:44.121159702 +0000 UTC m=+435.631092656" observedRunningTime="2025-12-13 03:51:45.134762925 +0000 UTC m=+436.644695899" watchObservedRunningTime="2025-12-13 03:51:45.135069614 +0000 UTC m=+436.645002608"
Dec 13 03:51:46 crc kubenswrapper[4766]: I1213 03:51:46.067990 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v844g" event={"ID":"38153d6a-6577-469c-aa93-6eb38dd85064","Type":"ContainerStarted","Data":"abb2724775893e27095ea68ec3c95f9286b51c82f4e932bd3f7da19ba5ae2a71"}
Dec 13 03:51:46 crc kubenswrapper[4766]: I1213 03:51:46.070730 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsghz" event={"ID":"077b3190-346b-4de2-ae4a-b10ef4c0f635","Type":"ContainerStarted","Data":"0a4fe9da399b733408c67087fe291678f43bb3f3232e524baeb252a66547252d"}
Dec 13 03:51:46 crc kubenswrapper[4766]: I1213 03:51:46.092618 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v844g" podStartSLOduration=3.61451997 podStartE2EDuration="6.092593367s" podCreationTimestamp="2025-12-13 03:51:40 +0000 UTC" firstStartedPulling="2025-12-13 03:51:43.034417293 +0000 UTC m=+434.544350257" lastFinishedPulling="2025-12-13 03:51:45.5124907 +0000 UTC m=+437.022423654" observedRunningTime="2025-12-13 03:51:46.08892613 +0000 UTC m=+437.598859114" watchObservedRunningTime="2025-12-13 03:51:46.092593367 +0000 UTC m=+437.602526331"
Dec 13 03:51:46 crc kubenswrapper[4766]: I1213 03:51:46.113583 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tsghz" podStartSLOduration=2.412400752 podStartE2EDuration="7.113566166s" podCreationTimestamp="2025-12-13 03:51:39 +0000 UTC" firstStartedPulling="2025-12-13 03:51:41.014712026 +0000 UTC m=+432.524645000" lastFinishedPulling="2025-12-13 03:51:45.71587745 +0000 UTC m=+437.225810414" observedRunningTime="2025-12-13 03:51:46.113092673 +0000 UTC m=+437.623025637" watchObservedRunningTime="2025-12-13 03:51:46.113566166 +0000 UTC m=+437.623499130"
Dec 13 03:51:47 crc kubenswrapper[4766]: I1213 03:51:47.115826 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4fp7m"
Dec 13 03:51:47 crc kubenswrapper[4766]: I1213 03:51:47.116224 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4fp7m"
Dec 13 03:51:48 crc kubenswrapper[4766]: I1213 03:51:48.157968 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4fp7m" podUID="388cd2e4-3bb2-4972-be1a-cc0dcc346746" containerName="registry-server" probeResult="failure" output=<
Dec 13 03:51:48 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s
Dec 13 03:51:48 crc kubenswrapper[4766]: >
Dec 13 03:51:48 crc kubenswrapper[4766]: I1213 03:51:48.296640 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:48 crc kubenswrapper[4766]: I1213 03:51:48.296712 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:48 crc kubenswrapper[4766]: I1213 03:51:48.335112 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:49 crc kubenswrapper[4766]: I1213 03:51:49.184537 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s2k56"
Dec 13 03:51:49 crc kubenswrapper[4766]: I1213 03:51:49.705708 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tsghz"
Dec 13 03:51:49 crc kubenswrapper[4766]: I1213 03:51:49.705762 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tsghz"
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tsghz" Dec 13 03:51:49 crc kubenswrapper[4766]: I1213 03:51:49.750834 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tsghz" Dec 13 03:51:50 crc kubenswrapper[4766]: I1213 03:51:50.163660 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tsghz" Dec 13 03:51:50 crc kubenswrapper[4766]: I1213 03:51:50.794764 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:50 crc kubenswrapper[4766]: I1213 03:51:50.794825 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:50 crc kubenswrapper[4766]: I1213 03:51:50.842270 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:51 crc kubenswrapper[4766]: I1213 03:51:51.171649 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v844g" Dec 13 03:51:54 crc kubenswrapper[4766]: I1213 03:51:54.559260 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" podUID="342d355f-91c2-4c74-b72e-fa4164314fe1" containerName="registry" containerID="cri-o://bdfbe8c49094e2e5837da53d8773773ce2a52cd549bbb1217200ea950de86eaa" gracePeriod=30 Dec 13 03:51:55 crc kubenswrapper[4766]: I1213 03:51:55.850069 4766 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-5fm8f container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.37:5000/healthz\": dial tcp 10.217.0.37:5000: connect: connection refused" start-of-body= Dec 13 03:51:55 crc kubenswrapper[4766]: I1213 03:51:55.850139 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" podUID="342d355f-91c2-4c74-b72e-fa4164314fe1" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.37:5000/healthz\": dial tcp 10.217.0.37:5000: connect: connection refused" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.160407 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4fp7m" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.169295 4766 generic.go:334] "Generic (PLEG): container finished" podID="342d355f-91c2-4c74-b72e-fa4164314fe1" containerID="bdfbe8c49094e2e5837da53d8773773ce2a52cd549bbb1217200ea950de86eaa" exitCode=0 Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.169396 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" event={"ID":"342d355f-91c2-4c74-b72e-fa4164314fe1","Type":"ContainerDied","Data":"bdfbe8c49094e2e5837da53d8773773ce2a52cd549bbb1217200ea950de86eaa"} Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.213025 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4fp7m" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.789391 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.805469 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-bound-sa-token\") pod \"342d355f-91c2-4c74-b72e-fa4164314fe1\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.805653 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"342d355f-91c2-4c74-b72e-fa4164314fe1\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.805686 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/342d355f-91c2-4c74-b72e-fa4164314fe1-ca-trust-extracted\") pod \"342d355f-91c2-4c74-b72e-fa4164314fe1\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.806647 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-tls\") pod \"342d355f-91c2-4c74-b72e-fa4164314fe1\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.806694 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-trusted-ca\") pod \"342d355f-91c2-4c74-b72e-fa4164314fe1\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.806729 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-certificates\") pod \"342d355f-91c2-4c74-b72e-fa4164314fe1\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.806753 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws6tp\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-kube-api-access-ws6tp\") pod \"342d355f-91c2-4c74-b72e-fa4164314fe1\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.806788 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/342d355f-91c2-4c74-b72e-fa4164314fe1-installation-pull-secrets\") pod \"342d355f-91c2-4c74-b72e-fa4164314fe1\" (UID: \"342d355f-91c2-4c74-b72e-fa4164314fe1\") " Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.807921 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "342d355f-91c2-4c74-b72e-fa4164314fe1" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.808447 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "342d355f-91c2-4c74-b72e-fa4164314fe1" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.833691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "342d355f-91c2-4c74-b72e-fa4164314fe1" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.833864 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "342d355f-91c2-4c74-b72e-fa4164314fe1" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.834598 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-kube-api-access-ws6tp" (OuterVolumeSpecName: "kube-api-access-ws6tp") pod "342d355f-91c2-4c74-b72e-fa4164314fe1" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1"). InnerVolumeSpecName "kube-api-access-ws6tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.847129 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "342d355f-91c2-4c74-b72e-fa4164314fe1" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.847851 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342d355f-91c2-4c74-b72e-fa4164314fe1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "342d355f-91c2-4c74-b72e-fa4164314fe1" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.868099 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/342d355f-91c2-4c74-b72e-fa4164314fe1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "342d355f-91c2-4c74-b72e-fa4164314fe1" (UID: "342d355f-91c2-4c74-b72e-fa4164314fe1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.907851 4766 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.907906 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.907919 4766 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/342d355f-91c2-4c74-b72e-fa4164314fe1-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.907933 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws6tp\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-kube-api-access-ws6tp\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.907945 4766 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/342d355f-91c2-4c74-b72e-fa4164314fe1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.907956 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/342d355f-91c2-4c74-b72e-fa4164314fe1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:57 crc kubenswrapper[4766]: I1213 03:51:57.907968 4766 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/342d355f-91c2-4c74-b72e-fa4164314fe1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 13 03:51:58 crc kubenswrapper[4766]: I1213 03:51:58.177453 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" event={"ID":"342d355f-91c2-4c74-b72e-fa4164314fe1","Type":"ContainerDied","Data":"d53cecfb8952a4c87814db71552cd2856b25916f2a6fe12d59348fb943af325f"} Dec 13 03:51:58 crc kubenswrapper[4766]: I1213 03:51:58.177532 4766 scope.go:117] "RemoveContainer" containerID="bdfbe8c49094e2e5837da53d8773773ce2a52cd549bbb1217200ea950de86eaa" Dec 13 03:51:58 crc kubenswrapper[4766]: I1213 03:51:58.178989 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5fm8f" Dec 13 03:51:58 crc kubenswrapper[4766]: I1213 03:51:58.220205 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5fm8f"] Dec 13 03:51:58 crc kubenswrapper[4766]: I1213 03:51:58.226104 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5fm8f"] Dec 13 03:51:59 crc kubenswrapper[4766]: I1213 03:51:59.624072 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="342d355f-91c2-4c74-b72e-fa4164314fe1" path="/var/lib/kubelet/pods/342d355f-91c2-4c74-b72e-fa4164314fe1/volumes" Dec 13 03:54:09 crc kubenswrapper[4766]: I1213 03:54:09.731963 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 03:54:09 crc kubenswrapper[4766]: I1213 03:54:09.732768 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 03:54:39 crc kubenswrapper[4766]: I1213 03:54:39.732113 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 03:54:39 crc kubenswrapper[4766]: I1213 03:54:39.733099 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 03:55:09 crc kubenswrapper[4766]: I1213 03:55:09.731960 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 03:55:09 crc kubenswrapper[4766]: I1213 03:55:09.732724 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 03:55:09 crc kubenswrapper[4766]: I1213 03:55:09.732813 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 03:55:09 crc kubenswrapper[4766]: I1213 03:55:09.733630 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3581901a9e5d232fe2bee9f853467e0cc1f606b0f931756d9ef9ab621fab6fda"} pod="openshift-machine-config-operator/machine-config-daemon-94w9l" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Dec 13 03:55:09 crc kubenswrapper[4766]: I1213 03:55:09.733698 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" containerID="cri-o://3581901a9e5d232fe2bee9f853467e0cc1f606b0f931756d9ef9ab621fab6fda" gracePeriod=600 Dec 13 03:55:10 crc kubenswrapper[4766]: I1213 03:55:10.384927 4766 generic.go:334] "Generic (PLEG): container finished" podID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerID="3581901a9e5d232fe2bee9f853467e0cc1f606b0f931756d9ef9ab621fab6fda" exitCode=0 Dec 13 03:55:10 crc kubenswrapper[4766]: I1213 03:55:10.384989 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerDied","Data":"3581901a9e5d232fe2bee9f853467e0cc1f606b0f931756d9ef9ab621fab6fda"} Dec 13 03:55:10 crc kubenswrapper[4766]: I1213 03:55:10.385497 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"f2151a06f5707b72e56cc0032442b9fe647442317230e16d90226a37ee92ba85"} Dec 13 03:55:10 crc kubenswrapper[4766]: I1213 03:55:10.385566 4766 scope.go:117] "RemoveContainer" containerID="ca44fbdb60b7c4e21f7f23576ab2a08072b8d79ddd151b8f9523f542ad2e0779" Dec 13 03:57:23 crc kubenswrapper[4766]: I1213 03:57:23.171597 4766 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.155958 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2dfkj"] Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.156870 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovn-controller" containerID="cri-o://4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8" gracePeriod=30 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.156921 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="nbdb" containerID="cri-o://7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8" gracePeriod=30 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.157049 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="sbdb" containerID="cri-o://b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1" gracePeriod=30 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.157131 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="kube-rbac-proxy-node" containerID="cri-o://f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839" gracePeriod=30 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.157167 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" 
podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="northd" containerID="cri-o://091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd" gracePeriod=30 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.157237 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910" gracePeriod=30 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.157270 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovn-acl-logging" containerID="cri-o://a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428" gracePeriod=30 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.196694 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" containerID="cri-o://fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0" gracePeriod=30 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.512580 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/3.log" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.517339 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovn-acl-logging/0.log" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.522404 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovn-controller/0.log" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.523325 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.603564 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xdldq"] Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604447 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604491 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604505 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604512 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604555 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="kubecfg-setup" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604566 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="kubecfg-setup" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604575 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604582 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604591 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="nbdb" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604597 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="nbdb" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604606 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604612 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604622 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="northd" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604628 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="northd" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604636 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="sbdb" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604642 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="sbdb" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604652 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovn-acl-logging" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604659 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovn-acl-logging" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604676 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovn-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604683 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovn-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604695 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="342d355f-91c2-4c74-b72e-fa4164314fe1" containerName="registry" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604703 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="342d355f-91c2-4c74-b72e-fa4164314fe1" containerName="registry" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.604714 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="kube-rbac-proxy-node" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604722 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="kube-rbac-proxy-node" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.604863 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="342d355f-91c2-4c74-b72e-fa4164314fe1" containerName="registry" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605043 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovn-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605071 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605087 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605098 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="northd" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605108 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605122 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605132 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovn-acl-logging" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605143 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="sbdb" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605155 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="kube-rbac-proxy-node" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605166 4766 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605177 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="nbdb" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.605338 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605353 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.605361 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605370 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.605516 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2621562-4c91-40a3-ad72-29d325404496" containerName="ovnkube-controller" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.608078 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633077 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-bin\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633136 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-systemd-units\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633160 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-netd\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633176 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-var-lib-openvswitch\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633196 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-ovn\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633217 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-kubelet\") pod 
\"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633238 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-config\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633280 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gcjs\" (UniqueName: \"kubernetes.io/projected/c2621562-4c91-40a3-ad72-29d325404496-kube-api-access-9gcjs\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633314 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-node-log\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633330 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-systemd\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633345 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-openvswitch\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633368 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-slash\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633393 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-log-socket\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633417 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-script-lib\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633468 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-etc-openvswitch\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633501 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-netns\") pod 
\"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633518 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-env-overrides\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633544 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2621562-4c91-40a3-ad72-29d325404496-ovn-node-metrics-cert\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633571 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633588 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-ovn-kubernetes\") pod \"c2621562-4c91-40a3-ad72-29d325404496\" (UID: \"c2621562-4c91-40a3-ad72-29d325404496\") " Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633740 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633796 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633833 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633884 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634076 4766 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634098 4766 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634157 4766 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634171 4766 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634207 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634234 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634261 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.633740 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-node-log" (OuterVolumeSpecName: "node-log") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634298 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634336 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-log-socket" (OuterVolumeSpecName: "log-socket") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634352 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634395 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-slash" (OuterVolumeSpecName: "host-slash") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634468 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.634809 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.635134 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.635264 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.635566 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.641377 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2621562-4c91-40a3-ad72-29d325404496-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.647694 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2621562-4c91-40a3-ad72-29d325404496-kube-api-access-9gcjs" (OuterVolumeSpecName: "kube-api-access-9gcjs") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "kube-api-access-9gcjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.653895 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c2621562-4c91-40a3-ad72-29d325404496" (UID: "c2621562-4c91-40a3-ad72-29d325404496"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.698130 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovnkube-controller/3.log" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.701010 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovn-acl-logging/0.log" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.701593 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2dfkj_c2621562-4c91-40a3-ad72-29d325404496/ovn-controller/0.log" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702085 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0" exitCode=0 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702119 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1" exitCode=0 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702129 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8" exitCode=0 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702138 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd" exitCode=0 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702146 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910" exitCode=0 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702154 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839" exitCode=0 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702160 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428" exitCode=143 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702167 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2621562-4c91-40a3-ad72-29d325404496" containerID="4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8" exitCode=143 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702206 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702237 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702249 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702259 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702279 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702289 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702312 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702336 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702348 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702355 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702363 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702370 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702381 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702386 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702392 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702401 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702410 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702417 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702456 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702464 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702469 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702475 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702481 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702494 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702500 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702506 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702516 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702526 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702538 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 
03:57:39.702543 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702549 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702555 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702560 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702575 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702581 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702587 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702592 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702600 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" event={"ID":"c2621562-4c91-40a3-ad72-29d325404496","Type":"ContainerDied","Data":"984bccf61d56ed05945c6a552659c35dc72121c452716b8fe773aa569aa9aa69"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702609 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702615 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702622 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702634 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702640 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 
03:57:39.702646 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702652 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702659 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702664 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702670 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702670 4766 scope.go:117] "RemoveContainer" containerID="fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.702717 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2dfkj" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.705986 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/2.log" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.707384 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/1.log" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.707503 4766 generic.go:334] "Generic (PLEG): container finished" podID="b724d1e1-9ded-434e-b852-f5233f27ef32" containerID="934184c50f0ad116964e4b6847b2598415944b7ff00877ec16ecaeb4f28a00b1" exitCode=2 Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.707562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6n4vc" event={"ID":"b724d1e1-9ded-434e-b852-f5233f27ef32","Type":"ContainerDied","Data":"934184c50f0ad116964e4b6847b2598415944b7ff00877ec16ecaeb4f28a00b1"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.707620 4766 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40ada8692e77f3fdcfb7af0831f398beba03791a2e7286360296b007c809a329"} Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.708576 4766 scope.go:117] "RemoveContainer" containerID="934184c50f0ad116964e4b6847b2598415944b7ff00877ec16ecaeb4f28a00b1" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.734416 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.734572 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" 
podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.739110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-run-systemd\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.739305 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-log-socket\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.739394 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-var-lib-openvswitch\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.739488 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-slash\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.739711 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-run-ovn\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.739813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/553e05a3-7464-411c-9632-5f3e49d41b36-ovnkube-config\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.739850 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-run-openvswitch\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.739923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/553e05a3-7464-411c-9632-5f3e49d41b36-ovnkube-script-lib\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740059 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-kubelet\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740098 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/553e05a3-7464-411c-9632-5f3e49d41b36-env-overrides\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740201 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/553e05a3-7464-411c-9632-5f3e49d41b36-ovn-node-metrics-cert\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740288 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-node-log\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740318 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-cni-netd\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740360 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-etc-openvswitch\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740385 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-cni-bin\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740442 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740514 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-run-ovn-kubernetes\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740547 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-run-netns\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740597 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-systemd-units\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740619 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnv4g\" (UniqueName: \"kubernetes.io/projected/553e05a3-7464-411c-9632-5f3e49d41b36-kube-api-access-qnv4g\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740687 4766 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740703 4766 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740713 4766 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740723 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740738 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gcjs\" (UniqueName: \"kubernetes.io/projected/c2621562-4c91-40a3-ad72-29d325404496-kube-api-access-9gcjs\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740751 4766 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-node-log\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740761 4766 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740774 4766 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740786 4766 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-slash\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740798 4766 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-log-socket\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740811 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740821 4766 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740832 4766 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740841 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2621562-4c91-40a3-ad72-29d325404496-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740852 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2621562-4c91-40a3-ad72-29d325404496-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.740866 4766 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2621562-4c91-40a3-ad72-29d325404496-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.756509 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2dfkj"] Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.758047 4766 scope.go:117] "RemoveContainer" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.760178 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2dfkj"] Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.788190 4766 scope.go:117] "RemoveContainer" containerID="b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.805311 4766 scope.go:117] "RemoveContainer" containerID="7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.836602 4766 scope.go:117] "RemoveContainer" containerID="091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.842495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-run-openvswitch\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.842556 
4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/553e05a3-7464-411c-9632-5f3e49d41b36-ovnkube-script-lib\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.842594 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-run-openvswitch\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.842625 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-kubelet\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.842598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-kubelet\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.842687 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/553e05a3-7464-411c-9632-5f3e49d41b36-env-overrides\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.842735 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/553e05a3-7464-411c-9632-5f3e49d41b36-ovn-node-metrics-cert\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.842769 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-node-log\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.842795 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-cni-netd\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.842829 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-etc-openvswitch\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843250 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-cni-bin\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843284 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-cni-netd\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843349 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-run-ovn-kubernetes\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843103 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-etc-openvswitch\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843372 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843064 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-node-log\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843385 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-cni-bin\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843319 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-run-ovn-kubernetes\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843505 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-run-netns\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843541 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-systemd-units\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843563 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnv4g\" (UniqueName: \"kubernetes.io/projected/553e05a3-7464-411c-9632-5f3e49d41b36-kube-api-access-qnv4g\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843653 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-run-systemd\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843684 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-log-socket\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843740 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-var-lib-openvswitch\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843758 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-slash\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843766 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/553e05a3-7464-411c-9632-5f3e49d41b36-env-overrides\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843809 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-run-ovn\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843786 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-run-ovn\") pod \"ovnkube-node-xdldq\" (UID: 
\"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843838 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-log-socket\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/553e05a3-7464-411c-9632-5f3e49d41b36-ovnkube-config\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.843967 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-run-systemd\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.844030 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-var-lib-openvswitch\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.844059 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-run-netns\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.844095 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-host-slash\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.844161 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/553e05a3-7464-411c-9632-5f3e49d41b36-systemd-units\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.845634 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/553e05a3-7464-411c-9632-5f3e49d41b36-ovnkube-config\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.845716 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/553e05a3-7464-411c-9632-5f3e49d41b36-ovnkube-script-lib\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.847848 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/553e05a3-7464-411c-9632-5f3e49d41b36-ovn-node-metrics-cert\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.858345 4766 scope.go:117] "RemoveContainer" containerID="bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.864168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnv4g\" (UniqueName: \"kubernetes.io/projected/553e05a3-7464-411c-9632-5f3e49d41b36-kube-api-access-qnv4g\") pod \"ovnkube-node-xdldq\" (UID: \"553e05a3-7464-411c-9632-5f3e49d41b36\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.878920 4766 scope.go:117] "RemoveContainer" containerID="f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.898292 4766 scope.go:117] "RemoveContainer" containerID="a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.913506 4766 scope.go:117] "RemoveContainer" containerID="4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.935251 4766 scope.go:117] "RemoveContainer" containerID="f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.935866 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.958714 4766 scope.go:117] "RemoveContainer" containerID="fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.959810 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": container with ID starting with fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0 not found: ID does not exist" containerID="fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.959876 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"} err="failed to get container status \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": rpc error: code = NotFound desc = could not find container \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": container with ID starting with fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.959947 4766 scope.go:117] "RemoveContainer" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.960557 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8\": container with ID starting with 
aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8 not found: ID does not exist" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.960644 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"} err="failed to get container status \"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8\": rpc error: code = NotFound desc = could not find container \"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8\": container with ID starting with aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.960696 4766 scope.go:117] "RemoveContainer" containerID="b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.961168 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\": container with ID starting with b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1 not found: ID does not exist" containerID="b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.961205 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"} err="failed to get container status \"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\": rpc error: code = NotFound desc = could not find container \"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\": container with ID starting with b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.961228 4766 scope.go:117] "RemoveContainer" containerID="7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.962076 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\": container with ID starting with 7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8 not found: ID does not exist" containerID="7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.962138 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"} err="failed to get container status \"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\": rpc error: code = NotFound desc = could not find container \"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\": container with ID starting with 7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.962176 4766 scope.go:117] "RemoveContainer" containerID="091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.962719 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\": container with ID starting with 091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd not found: ID does not exist" containerID="091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.962770 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"} err="failed to get container status \"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\": rpc error: code = NotFound desc = could not find container \"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\": container with ID starting with 091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.962800 4766 scope.go:117] "RemoveContainer" containerID="bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.963288 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\": container with ID starting with bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910 not found: ID does not exist" containerID="bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.963384 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"} err="failed to get container status \"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\": rpc error: code = NotFound desc = could not find container \"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\": container with ID starting with bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.963487 4766 scope.go:117] "RemoveContainer" containerID="f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.963994 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\": container with ID starting with f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839 not found: ID does not exist" containerID="f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.964028 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"} err="failed to get container status \"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\": rpc error: code = NotFound desc = could not find container \"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\": container with ID starting with f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.964047 4766 scope.go:117] "RemoveContainer" 
containerID="a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.964296 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\": container with ID starting with a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428 not found: ID does not exist" containerID="a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.964332 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"} err="failed to get container status \"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\": rpc error: code = NotFound desc = could not find container \"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\": container with ID starting with a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.964355 4766 scope.go:117] "RemoveContainer" containerID="4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.964682 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\": container with ID starting with 4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8 not found: ID does not exist" containerID="4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.964718 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"} err="failed to get container status \"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\": rpc error: code = NotFound desc = could not find container \"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\": container with ID starting with 4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.964765 4766 scope.go:117] "RemoveContainer" containerID="f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d" Dec 13 03:57:39 crc kubenswrapper[4766]: E1213 03:57:39.965184 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\": container with ID starting with f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d not found: ID does not exist" containerID="f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.965227 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"} err="failed to get container status \"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\": rpc error: code = NotFound desc = could not find container \"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\": container with ID starting with 
f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.965250 4766 scope.go:117] "RemoveContainer" containerID="fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.965599 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"} err="failed to get container status \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": rpc error: code = NotFound desc = could not find container \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": container with ID starting with fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.965663 4766 scope.go:117] "RemoveContainer" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.966112 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"} err="failed to get container status \"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8\": rpc error: code = NotFound desc = could not find container \"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8\": container with ID starting with aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.966179 4766 scope.go:117] "RemoveContainer" containerID="b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.966511 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"} err="failed to get container status \"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\": rpc error: code = NotFound desc = could not find container \"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\": container with ID starting with b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.966545 4766 scope.go:117] "RemoveContainer" containerID="7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.966810 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"} err="failed to get container status \"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\": rpc error: code = NotFound desc = could not find container \"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\": container with ID starting with 7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.966846 4766 scope.go:117] "RemoveContainer" containerID="091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.967093 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"} err="failed to get container status \"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\": rpc error: code = NotFound desc = could not find container \"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\": container with ID starting with 091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.967121 4766 scope.go:117] "RemoveContainer" containerID="bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.967371 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"} err="failed to get container status \"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\": rpc error: code = NotFound desc = could not find container \"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\": container with ID starting with bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.967393 4766 scope.go:117] "RemoveContainer" containerID="f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.967621 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"} err="failed to get container status \"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\": rpc error: code = NotFound desc = could not find container \"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\": container with ID starting with f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.967652 4766 scope.go:117] "RemoveContainer" containerID="a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.967924 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"} err="failed to get container status \"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\": rpc error: code = NotFound desc = could not find container \"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\": container with ID starting with a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.968030 4766 scope.go:117] "RemoveContainer" containerID="4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.968284 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"} err="failed to get container status \"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\": rpc error: code = NotFound desc = could not find container \"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\": container with ID starting with 4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8 not found: ID does not exist" Dec 
13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.968312 4766 scope.go:117] "RemoveContainer" containerID="f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.968580 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"} err="failed to get container status \"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\": rpc error: code = NotFound desc = could not find container \"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\": container with ID starting with f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.968606 4766 scope.go:117] "RemoveContainer" containerID="fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.968824 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"} err="failed to get container status \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": rpc error: code = NotFound desc = could not find container \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": container with ID starting with fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.968861 4766 scope.go:117] "RemoveContainer" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.969269 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"} err="failed to get container status \"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8\": rpc error: code = NotFound desc = could not find container \"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8\": container with ID starting with aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.969300 4766 scope.go:117] "RemoveContainer" containerID="b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.969658 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"} err="failed to get container status \"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\": rpc error: code = NotFound desc = could not find container \"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\": container with ID starting with b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.969695 4766 scope.go:117] "RemoveContainer" containerID="7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8" Dec 13 03:57:39 crc kubenswrapper[4766]: W1213 03:57:39.969636 4766 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod553e05a3_7464_411c_9632_5f3e49d41b36.slice/crio-b9a6881f75f2b56421d2ccd0cade0312dff2c8c3a4ab7be1611044fc8eda599a WatchSource:0}: Error finding container b9a6881f75f2b56421d2ccd0cade0312dff2c8c3a4ab7be1611044fc8eda599a: Status 404 returned error can't find the container with id b9a6881f75f2b56421d2ccd0cade0312dff2c8c3a4ab7be1611044fc8eda599a Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.970058 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"} err="failed to get container status \"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\": rpc error: code = NotFound desc = could not find container \"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\": container with ID starting with 7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.970079 4766 scope.go:117] "RemoveContainer" containerID="091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.972704 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"} err="failed to get container status \"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\": rpc error: code = NotFound desc = could not find container \"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\": container with ID starting with 091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.972749 4766 scope.go:117] "RemoveContainer" containerID="bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.973093 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"} err="failed to get container status \"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\": rpc error: code = NotFound desc = could not find container \"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\": container with ID starting with bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.973149 4766 scope.go:117] "RemoveContainer" containerID="f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.973463 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"} err="failed to get container status \"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\": rpc error: code = NotFound desc = could not find container \"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\": container with ID starting with f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839 not found: ID does not exist" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.973494 4766 scope.go:117] "RemoveContainer" containerID="a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428" Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 
03:57:39.973851 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"} err="failed to get container status \"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\": rpc error: code = NotFound desc = could not find container \"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\": container with ID starting with a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428 not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.973913 4766 scope.go:117] "RemoveContainer" containerID="4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.976553 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"} err="failed to get container status \"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\": rpc error: code = NotFound desc = could not find container \"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\": container with ID starting with 4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8 not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.976681 4766 scope.go:117] "RemoveContainer" containerID="f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.977549 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"} err="failed to get container status \"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\": rpc error: code = NotFound desc = could not find container \"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\": container with ID starting with f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.977588 4766 scope.go:117] "RemoveContainer" containerID="fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.978548 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"} err="failed to get container status \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": rpc error: code = NotFound desc = could not find container \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": container with ID starting with fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0 not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.978588 4766 scope.go:117] "RemoveContainer" containerID="aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.979151 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8"} err="failed to get container status \"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8\": rpc error: code = NotFound desc = could not find container \"aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8\": container with ID starting with aa0685e32ac5b0dd799375281224af74e2485ce6fae0ed9c9cd56ac3f648f3a8 not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.979180 4766 scope.go:117] "RemoveContainer" containerID="b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.979470 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1"} err="failed to get container status \"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\": rpc error: code = NotFound desc = could not find container \"b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1\": container with ID starting with b23b7df616eae980ad39518abaf18567e3ef9bd689778887d364974cacf45af1 not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.979493 4766 scope.go:117] "RemoveContainer" containerID="7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.979765 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8"} err="failed to get container status \"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\": rpc error: code = NotFound desc = could not find container \"7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8\": container with ID starting with 7e3fd51b5f0edda1d9aba888976037fe0abe8eeb49f1f15e9d39b4eefde0f6d8 not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.979787 4766 scope.go:117] "RemoveContainer" containerID="091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.980188 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd"} err="failed to get container status \"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\": rpc error: code = NotFound desc = could not find container \"091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd\": container with ID starting with 091d81286b07f984f7e9c2077da1b1380a2a3bb93c4febdeaaa15015aecc7fcd not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.980209 4766 scope.go:117] "RemoveContainer" containerID="bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.980738 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910"} err="failed to get container status \"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\": rpc error: code = NotFound desc = could not find container \"bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910\": container with ID starting with bd0a72ceeb43ada8e64a460f851658626b5e825f877b148019ba4180a2723910 not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.980761 4766 scope.go:117] "RemoveContainer" containerID="f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.981040 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839"} err="failed to get container status \"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\": rpc error: code = NotFound desc = could not find container \"f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839\": container with ID starting with f85ee136112bdd797229463e615c8d7c68fae42d324d24aa54e742fb3a644839 not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.981059 4766 scope.go:117] "RemoveContainer" containerID="a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.981354 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428"} err="failed to get container status \"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\": rpc error: code = NotFound desc = could not find container \"a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428\": container with ID starting with a6b6768063a60ce298339783b4a78fe5b7c7531d34be85392c28ca146edf2428 not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.981392 4766 scope.go:117] "RemoveContainer" containerID="4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.981661 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8"} err="failed to get container status \"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\": rpc error: code = NotFound desc = could not find container \"4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8\": container with ID starting with 4a7f6b5dd12b220905aa6108732712b891a334d2566fe61bdf15e320770901b8 not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.981689 4766 scope.go:117] "RemoveContainer" containerID="f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.982068 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d"} err="failed to get container status \"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\": rpc error: code = NotFound desc = could not find container \"f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d\": container with ID starting with f4ad4677dee2d30d8492bc3ff64d965af142b68da2218597a9f4a646405ba25d not found: ID does not exist"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.982094 4766 scope.go:117] "RemoveContainer" containerID="fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"
Dec 13 03:57:39 crc kubenswrapper[4766]: I1213 03:57:39.982464 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0"} err="failed to get container status \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": rpc error: code = NotFound desc = could not find container \"fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0\": container with ID starting with fecf8dc2ba8baa5eef62c91c272dc84a46dc41000155767782da4cd2b2462bc0 not found: ID does not exist"
Dec 13 03:57:40 crc kubenswrapper[4766]: I1213 03:57:40.726194 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/2.log"
Dec 13 03:57:40 crc kubenswrapper[4766]: I1213 03:57:40.729780 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/1.log"
Dec 13 03:57:40 crc kubenswrapper[4766]: I1213 03:57:40.729906 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6n4vc" event={"ID":"b724d1e1-9ded-434e-b852-f5233f27ef32","Type":"ContainerStarted","Data":"34ca93a1474bf4befdbda5d8f386c163a3c25843b9f7f851088f43517e9d4783"}
Dec 13 03:57:40 crc kubenswrapper[4766]: I1213 03:57:40.732846 4766 generic.go:334] "Generic (PLEG): container finished" podID="553e05a3-7464-411c-9632-5f3e49d41b36" containerID="7adbbd46f11fd34bde04fd64e7ccbaa57983d1c419b4a247c9c81d3019ce816b" exitCode=0
Dec 13 03:57:40 crc kubenswrapper[4766]: I1213 03:57:40.732892 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" event={"ID":"553e05a3-7464-411c-9632-5f3e49d41b36","Type":"ContainerDied","Data":"7adbbd46f11fd34bde04fd64e7ccbaa57983d1c419b4a247c9c81d3019ce816b"}
Dec 13 03:57:40 crc kubenswrapper[4766]: I1213 03:57:40.732919 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" event={"ID":"553e05a3-7464-411c-9632-5f3e49d41b36","Type":"ContainerStarted","Data":"b9a6881f75f2b56421d2ccd0cade0312dff2c8c3a4ab7be1611044fc8eda599a"}
Dec 13 03:57:41 crc kubenswrapper[4766]: I1213 03:57:41.637790 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2621562-4c91-40a3-ad72-29d325404496" path="/var/lib/kubelet/pods/c2621562-4c91-40a3-ad72-29d325404496/volumes"
Dec 13 03:57:41 crc kubenswrapper[4766]: I1213 03:57:41.767833 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" event={"ID":"553e05a3-7464-411c-9632-5f3e49d41b36","Type":"ContainerStarted","Data":"467737e4fa4fb3adfea07d8ba8deae12332060838dd11988f2358e80ccdcfa48"}
Dec 13 03:57:41 crc kubenswrapper[4766]: I1213 03:57:41.767929 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" event={"ID":"553e05a3-7464-411c-9632-5f3e49d41b36","Type":"ContainerStarted","Data":"208f0269d30ad50fc3ea83a85af518bcf4c1ea9949a3499a8a3270272da6d2a1"}
Dec 13 03:57:41 crc kubenswrapper[4766]: I1213 03:57:41.767953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" event={"ID":"553e05a3-7464-411c-9632-5f3e49d41b36","Type":"ContainerStarted","Data":"1db08c3347103f881d9be9ebffd7d4b2013d7c974f78462cf854dc8dafb56c8a"}
Dec 13 03:57:41 crc kubenswrapper[4766]: I1213 03:57:41.767973 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" event={"ID":"553e05a3-7464-411c-9632-5f3e49d41b36","Type":"ContainerStarted","Data":"6d5952dc01f02bd28e0496e73c0d966aa833518bb556d4a18d870d10f9ea7b6a"}
Dec 13 03:57:41 crc kubenswrapper[4766]: I1213 03:57:41.767993 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" event={"ID":"553e05a3-7464-411c-9632-5f3e49d41b36","Type":"ContainerStarted","Data":"e5e6230eeecb04557cbde7279dd234b0501ede79168e88613e57d95db794c97b"}
Dec 13 03:57:41 crc kubenswrapper[4766]: I1213 03:57:41.768013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" event={"ID":"553e05a3-7464-411c-9632-5f3e49d41b36","Type":"ContainerStarted","Data":"e67820136f6cca6c63ad872c90b1ebf4376ef64f4d1313cbb876d747e1f0647f"}
Dec 13 03:57:45 crc kubenswrapper[4766]: I1213 03:57:45.097765 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" event={"ID":"553e05a3-7464-411c-9632-5f3e49d41b36","Type":"ContainerStarted","Data":"333a22d812c6b7b58d933c5cdd56047339d3cb3be25599c202f10b3aae2876a7"}
Dec 13 03:57:48 crc kubenswrapper[4766]: I1213 03:57:48.125031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" event={"ID":"553e05a3-7464-411c-9632-5f3e49d41b36","Type":"ContainerStarted","Data":"ee085243c69146842d29cedf570ceae2ce30b38cd76a314eda63863031a88da7"}
Dec 13 03:57:48 crc kubenswrapper[4766]: I1213 03:57:48.126574 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq"
Dec 13 03:57:48 crc kubenswrapper[4766]: I1213 03:57:48.126608 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq"
Dec 13 03:57:48 crc kubenswrapper[4766]: I1213 03:57:48.235244 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq" podStartSLOduration=9.235210315 podStartE2EDuration="9.235210315s" podCreationTimestamp="2025-12-13 03:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 03:57:48.230250243 +0000 UTC m=+799.740183227" watchObservedRunningTime="2025-12-13 03:57:48.235210315 +0000 UTC m=+799.745143279"
Dec 13 03:57:48 crc kubenswrapper[4766]: I1213 03:57:48.241905 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq"
Dec 13 03:57:49 crc kubenswrapper[4766]: I1213 03:57:49.132126 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq"
Dec 13 03:57:49 crc kubenswrapper[4766]: I1213 03:57:49.168010 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq"
Dec 13 03:58:06 crc kubenswrapper[4766]: I1213 03:58:06.740728 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"]
Dec 13 03:58:06 crc kubenswrapper[4766]: I1213 03:58:06.744559 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:06 crc kubenswrapper[4766]: I1213 03:58:06.747879 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 13 03:58:06 crc kubenswrapper[4766]: I1213 03:58:06.750732 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"]
Dec 13 03:58:06 crc kubenswrapper[4766]: I1213 03:58:06.920075 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:06 crc kubenswrapper[4766]: I1213 03:58:06.920205 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lccqf\" (UniqueName: \"kubernetes.io/projected/dd2797cc-1780-4828-b835-7bde5a0de2c4-kube-api-access-lccqf\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:06 crc kubenswrapper[4766]: I1213 03:58:06.920504 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:07 crc kubenswrapper[4766]: I1213 03:58:07.022193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:07 crc kubenswrapper[4766]: I1213 03:58:07.022780 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lccqf\" (UniqueName: \"kubernetes.io/projected/dd2797cc-1780-4828-b835-7bde5a0de2c4-kube-api-access-lccqf\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:07 crc kubenswrapper[4766]: I1213 03:58:07.022983 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:07 crc kubenswrapper[4766]: I1213 03:58:07.023419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:07 crc kubenswrapper[4766]: I1213 03:58:07.023558 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:07 crc kubenswrapper[4766]: I1213 03:58:07.046107 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lccqf\" (UniqueName: \"kubernetes.io/projected/dd2797cc-1780-4828-b835-7bde5a0de2c4-kube-api-access-lccqf\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:07 crc kubenswrapper[4766]: I1213 03:58:07.114675 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:07 crc kubenswrapper[4766]: I1213 03:58:07.341140 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"]
Dec 13 03:58:07 crc kubenswrapper[4766]: W1213 03:58:07.349472 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd2797cc_1780_4828_b835_7bde5a0de2c4.slice/crio-cc18c45627d49c3414ad09da82d26f727241c9c33ec49a1ad5be128fcc95dd3b WatchSource:0}: Error finding container cc18c45627d49c3414ad09da82d26f727241c9c33ec49a1ad5be128fcc95dd3b: Status 404 returned error can't find the container with id cc18c45627d49c3414ad09da82d26f727241c9c33ec49a1ad5be128fcc95dd3b
Dec 13 03:58:08 crc kubenswrapper[4766]: I1213 03:58:08.286285 4766 generic.go:334] "Generic (PLEG): container finished" podID="dd2797cc-1780-4828-b835-7bde5a0de2c4" containerID="a664314ae2ec136ee7f4105fe0a4edaf2150e622d4cc2df545c6652190ed1d44" exitCode=0
Dec 13 03:58:08 crc kubenswrapper[4766]: I1213 03:58:08.286359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs" event={"ID":"dd2797cc-1780-4828-b835-7bde5a0de2c4","Type":"ContainerDied","Data":"a664314ae2ec136ee7f4105fe0a4edaf2150e622d4cc2df545c6652190ed1d44"}
Dec 13 03:58:08 crc kubenswrapper[4766]: I1213 03:58:08.286671 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs" event={"ID":"dd2797cc-1780-4828-b835-7bde5a0de2c4","Type":"ContainerStarted","Data":"cc18c45627d49c3414ad09da82d26f727241c9c33ec49a1ad5be128fcc95dd3b"}
Dec 13 03:58:08 crc kubenswrapper[4766]: I1213 03:58:08.290066 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.063842 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cbdpw"]
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.065228 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.079777 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbdpw"]
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.263604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn47q\" (UniqueName: \"kubernetes.io/projected/7309ad5b-bf11-4e31-981f-d1608fd6364d-kube-api-access-nn47q\") pod \"redhat-operators-cbdpw\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") " pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.263687 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-utilities\") pod \"redhat-operators-cbdpw\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") " pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.263735 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-catalog-content\") pod \"redhat-operators-cbdpw\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") " pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.364839 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn47q\" (UniqueName: \"kubernetes.io/projected/7309ad5b-bf11-4e31-981f-d1608fd6364d-kube-api-access-nn47q\") pod \"redhat-operators-cbdpw\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") " pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.364938 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-utilities\") pod \"redhat-operators-cbdpw\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") " pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.365002 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-catalog-content\") pod \"redhat-operators-cbdpw\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") " pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.366135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-utilities\") pod \"redhat-operators-cbdpw\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") " pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.366196 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-catalog-content\") pod \"redhat-operators-cbdpw\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") " pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.389923 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn47q\" (UniqueName: \"kubernetes.io/projected/7309ad5b-bf11-4e31-981f-d1608fd6364d-kube-api-access-nn47q\") pod \"redhat-operators-cbdpw\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") " pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.684406 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.732654 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.732749 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 03:58:09 crc kubenswrapper[4766]: I1213 03:58:09.969511 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdldq"
Dec 13 03:58:10 crc kubenswrapper[4766]: I1213 03:58:10.176071 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbdpw"]
Dec 13 03:58:10 crc kubenswrapper[4766]: W1213 03:58:10.180279 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7309ad5b_bf11_4e31_981f_d1608fd6364d.slice/crio-0043c1701ae1267fa8567455396e75344827ed52abb5aab7a2f919d07491253e WatchSource:0}: Error finding container 0043c1701ae1267fa8567455396e75344827ed52abb5aab7a2f919d07491253e: Status 404 returned error can't find the container with id 0043c1701ae1267fa8567455396e75344827ed52abb5aab7a2f919d07491253e
Dec 13 03:58:10 crc kubenswrapper[4766]: I1213 03:58:10.300810 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbdpw" event={"ID":"7309ad5b-bf11-4e31-981f-d1608fd6364d","Type":"ContainerStarted","Data":"0043c1701ae1267fa8567455396e75344827ed52abb5aab7a2f919d07491253e"}
Dec 13 03:58:10 crc kubenswrapper[4766]: I1213 03:58:10.304595 4766 generic.go:334] "Generic (PLEG): container finished" podID="dd2797cc-1780-4828-b835-7bde5a0de2c4" containerID="e153b939dae134282aac2013fbf11824921974f27fcea17ae791eb8880a71dce" exitCode=0
Dec 13 03:58:10 crc kubenswrapper[4766]: I1213 03:58:10.304625 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs" event={"ID":"dd2797cc-1780-4828-b835-7bde5a0de2c4","Type":"ContainerDied","Data":"e153b939dae134282aac2013fbf11824921974f27fcea17ae791eb8880a71dce"}
Dec 13 03:58:11 crc kubenswrapper[4766]: I1213 03:58:11.314376 4766 generic.go:334] "Generic (PLEG): container finished" podID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerID="facbd38ed78752983567bd37ed5653ee813eef969f895ac86d86734bc37339ef" exitCode=0
Dec 13 03:58:11 crc kubenswrapper[4766]: I1213 03:58:11.314747 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbdpw" event={"ID":"7309ad5b-bf11-4e31-981f-d1608fd6364d","Type":"ContainerDied","Data":"facbd38ed78752983567bd37ed5653ee813eef969f895ac86d86734bc37339ef"}
Dec 13 03:58:11 crc kubenswrapper[4766]: I1213 03:58:11.317991 4766 generic.go:334] "Generic (PLEG): container finished" podID="dd2797cc-1780-4828-b835-7bde5a0de2c4" containerID="3ab1fffa12e7768cb56f3e211c93e0b216f71c68786c4177c825f0759f7f4f64" exitCode=0
Dec 13 03:58:11 crc kubenswrapper[4766]: I1213 03:58:11.318022 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs" event={"ID":"dd2797cc-1780-4828-b835-7bde5a0de2c4","Type":"ContainerDied","Data":"3ab1fffa12e7768cb56f3e211c93e0b216f71c68786c4177c825f0759f7f4f64"}
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.324349 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbdpw" event={"ID":"7309ad5b-bf11-4e31-981f-d1608fd6364d","Type":"ContainerStarted","Data":"f52acc1810f91f8f8bf73426be04ca08627728655e4b910eea865d3786365d93"}
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.664976 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.858365 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lccqf\" (UniqueName: \"kubernetes.io/projected/dd2797cc-1780-4828-b835-7bde5a0de2c4-kube-api-access-lccqf\") pod \"dd2797cc-1780-4828-b835-7bde5a0de2c4\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") "
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.858547 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-bundle\") pod \"dd2797cc-1780-4828-b835-7bde5a0de2c4\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") "
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.858593 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-util\") pod \"dd2797cc-1780-4828-b835-7bde5a0de2c4\" (UID: \"dd2797cc-1780-4828-b835-7bde5a0de2c4\") "
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.860184 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-bundle" (OuterVolumeSpecName: "bundle") pod "dd2797cc-1780-4828-b835-7bde5a0de2c4" (UID: "dd2797cc-1780-4828-b835-7bde5a0de2c4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.868718 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2797cc-1780-4828-b835-7bde5a0de2c4-kube-api-access-lccqf" (OuterVolumeSpecName: "kube-api-access-lccqf") pod "dd2797cc-1780-4828-b835-7bde5a0de2c4" (UID: "dd2797cc-1780-4828-b835-7bde5a0de2c4"). InnerVolumeSpecName "kube-api-access-lccqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.878415 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-util" (OuterVolumeSpecName: "util") pod "dd2797cc-1780-4828-b835-7bde5a0de2c4" (UID: "dd2797cc-1780-4828-b835-7bde5a0de2c4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.960553 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-bundle\") on node \"crc\" DevicePath \"\""
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.960595 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd2797cc-1780-4828-b835-7bde5a0de2c4-util\") on node \"crc\" DevicePath \"\""
Dec 13 03:58:12 crc kubenswrapper[4766]: I1213 03:58:12.960605 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lccqf\" (UniqueName: \"kubernetes.io/projected/dd2797cc-1780-4828-b835-7bde5a0de2c4-kube-api-access-lccqf\") on node \"crc\" DevicePath \"\""
Dec 13 03:58:13 crc kubenswrapper[4766]: I1213 03:58:13.332350 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs"
Dec 13 03:58:13 crc kubenswrapper[4766]: I1213 03:58:13.332347 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs" event={"ID":"dd2797cc-1780-4828-b835-7bde5a0de2c4","Type":"ContainerDied","Data":"cc18c45627d49c3414ad09da82d26f727241c9c33ec49a1ad5be128fcc95dd3b"}
Dec 13 03:58:13 crc kubenswrapper[4766]: I1213 03:58:13.332474 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc18c45627d49c3414ad09da82d26f727241c9c33ec49a1ad5be128fcc95dd3b"
Dec 13 03:58:13 crc kubenswrapper[4766]: I1213 03:58:13.335086 4766 generic.go:334] "Generic (PLEG): container finished" podID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerID="f52acc1810f91f8f8bf73426be04ca08627728655e4b910eea865d3786365d93" exitCode=0
Dec 13 03:58:13 crc kubenswrapper[4766]: I1213 03:58:13.335161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbdpw" event={"ID":"7309ad5b-bf11-4e31-981f-d1608fd6364d","Type":"ContainerDied","Data":"f52acc1810f91f8f8bf73426be04ca08627728655e4b910eea865d3786365d93"}
Dec 13 03:58:14 crc kubenswrapper[4766]: I1213 03:58:14.343123 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbdpw" event={"ID":"7309ad5b-bf11-4e31-981f-d1608fd6364d","Type":"ContainerStarted","Data":"6021b4255598bda2c6732407adcb2a64b9dbadf499d92b555dfdee86cd82c785"}
Dec 13 03:58:19 crc kubenswrapper[4766]: I1213 03:58:19.685161 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:19 crc kubenswrapper[4766]: I1213 03:58:19.685804 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:20 crc kubenswrapper[4766]: I1213 03:58:20.734093 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cbdpw" podUID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerName="registry-server" probeResult="failure" output=<
Dec 13 03:58:20 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s
Dec 13 03:58:20 crc kubenswrapper[4766]: >
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.095253 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cbdpw" podStartSLOduration=11.475303944 podStartE2EDuration="14.095233192s" podCreationTimestamp="2025-12-13 03:58:09 +0000 UTC" firstStartedPulling="2025-12-13 03:58:11.317005811 +0000 UTC m=+822.826938775" lastFinishedPulling="2025-12-13 03:58:13.936935059 +0000 UTC m=+825.446868023" observedRunningTime="2025-12-13 03:58:14.372950229 +0000 UTC m=+825.882883193" watchObservedRunningTime="2025-12-13 03:58:23.095233192 +0000 UTC m=+834.605166156"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.099127 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"]
Dec 13 03:58:23 crc kubenswrapper[4766]: E1213 03:58:23.099375 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2797cc-1780-4828-b835-7bde5a0de2c4" containerName="extract"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.099398 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2797cc-1780-4828-b835-7bde5a0de2c4" containerName="extract"
Dec 13 03:58:23 crc kubenswrapper[4766]: E1213 03:58:23.099426 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2797cc-1780-4828-b835-7bde5a0de2c4" containerName="util"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.099434 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2797cc-1780-4828-b835-7bde5a0de2c4" containerName="util"
Dec 13 03:58:23 crc kubenswrapper[4766]: E1213 03:58:23.099459 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2797cc-1780-4828-b835-7bde5a0de2c4" containerName="pull"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.099465 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2797cc-1780-4828-b835-7bde5a0de2c4" containerName="pull"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.099555 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2797cc-1780-4828-b835-7bde5a0de2c4" containerName="extract"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.099932 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.103550 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.118099 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.118325 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.118393 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.118853 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-hw7l2"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.145851 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"]
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.167207 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plch4\" (UniqueName: \"kubernetes.io/projected/92fe426f-df80-49bf-9259-7e04836a793f-kube-api-access-plch4\") pod \"metallb-operator-controller-manager-6468b8b4bf-7j6nm\" (UID: \"92fe426f-df80-49bf-9259-7e04836a793f\") " pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.167354 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92fe426f-df80-49bf-9259-7e04836a793f-webhook-cert\") pod \"metallb-operator-controller-manager-6468b8b4bf-7j6nm\" (UID: \"92fe426f-df80-49bf-9259-7e04836a793f\") " pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.167442 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92fe426f-df80-49bf-9259-7e04836a793f-apiservice-cert\") pod \"metallb-operator-controller-manager-6468b8b4bf-7j6nm\" (UID: \"92fe426f-df80-49bf-9259-7e04836a793f\") " pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.268559 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92fe426f-df80-49bf-9259-7e04836a793f-apiservice-cert\") pod \"metallb-operator-controller-manager-6468b8b4bf-7j6nm\" (UID: \"92fe426f-df80-49bf-9259-7e04836a793f\") " pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.268662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plch4\" (UniqueName: \"kubernetes.io/projected/92fe426f-df80-49bf-9259-7e04836a793f-kube-api-access-plch4\") pod \"metallb-operator-controller-manager-6468b8b4bf-7j6nm\" (UID: \"92fe426f-df80-49bf-9259-7e04836a793f\") " pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.268702 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92fe426f-df80-49bf-9259-7e04836a793f-webhook-cert\") pod \"metallb-operator-controller-manager-6468b8b4bf-7j6nm\" (UID: \"92fe426f-df80-49bf-9259-7e04836a793f\") " pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.277354 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92fe426f-df80-49bf-9259-7e04836a793f-webhook-cert\") pod \"metallb-operator-controller-manager-6468b8b4bf-7j6nm\" (UID: \"92fe426f-df80-49bf-9259-7e04836a793f\") " pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.277499 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92fe426f-df80-49bf-9259-7e04836a793f-apiservice-cert\") pod \"metallb-operator-controller-manager-6468b8b4bf-7j6nm\" (UID: \"92fe426f-df80-49bf-9259-7e04836a793f\") " pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.302252 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plch4\" (UniqueName: \"kubernetes.io/projected/92fe426f-df80-49bf-9259-7e04836a793f-kube-api-access-plch4\") pod \"metallb-operator-controller-manager-6468b8b4bf-7j6nm\" (UID: \"92fe426f-df80-49bf-9259-7e04836a793f\") " pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.429033 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.656071 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"]
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.688818 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"]
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.689426 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.697986 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7nhks"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.698730 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.701639 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.807517 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"]
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.880554 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v477g\" (UniqueName: \"kubernetes.io/projected/b43cb171-915c-4cb3-bc9b-1525fe72213e-kube-api-access-v477g\") pod \"metallb-operator-webhook-server-7fb5f44fc8-wqts7\" (UID: \"b43cb171-915c-4cb3-bc9b-1525fe72213e\") " pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.881063 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b43cb171-915c-4cb3-bc9b-1525fe72213e-apiservice-cert\") pod \"metallb-operator-webhook-server-7fb5f44fc8-wqts7\" (UID: \"b43cb171-915c-4cb3-bc9b-1525fe72213e\") " pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.881144 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b43cb171-915c-4cb3-bc9b-1525fe72213e-webhook-cert\") pod \"metallb-operator-webhook-server-7fb5f44fc8-wqts7\" (UID: \"b43cb171-915c-4cb3-bc9b-1525fe72213e\") " pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.982559 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v477g\" (UniqueName: \"kubernetes.io/projected/b43cb171-915c-4cb3-bc9b-1525fe72213e-kube-api-access-v477g\") pod \"metallb-operator-webhook-server-7fb5f44fc8-wqts7\" (UID: \"b43cb171-915c-4cb3-bc9b-1525fe72213e\") " pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.982644 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b43cb171-915c-4cb3-bc9b-1525fe72213e-apiservice-cert\") pod \"metallb-operator-webhook-server-7fb5f44fc8-wqts7\" (UID: \"b43cb171-915c-4cb3-bc9b-1525fe72213e\") " pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.982695 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b43cb171-915c-4cb3-bc9b-1525fe72213e-webhook-cert\") pod \"metallb-operator-webhook-server-7fb5f44fc8-wqts7\" (UID: \"b43cb171-915c-4cb3-bc9b-1525fe72213e\") " pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.990784 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b43cb171-915c-4cb3-bc9b-1525fe72213e-apiservice-cert\") pod \"metallb-operator-webhook-server-7fb5f44fc8-wqts7\" (UID: \"b43cb171-915c-4cb3-bc9b-1525fe72213e\") " pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:23 crc kubenswrapper[4766]: I1213 03:58:23.991381 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b43cb171-915c-4cb3-bc9b-1525fe72213e-webhook-cert\") pod \"metallb-operator-webhook-server-7fb5f44fc8-wqts7\" (UID: \"b43cb171-915c-4cb3-bc9b-1525fe72213e\") " pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:24 crc kubenswrapper[4766]: I1213 03:58:24.008469 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v477g\" (UniqueName: \"kubernetes.io/projected/b43cb171-915c-4cb3-bc9b-1525fe72213e-kube-api-access-v477g\") pod \"metallb-operator-webhook-server-7fb5f44fc8-wqts7\" (UID: \"b43cb171-915c-4cb3-bc9b-1525fe72213e\") " pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:24 crc kubenswrapper[4766]: I1213 03:58:24.044758 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:24 crc kubenswrapper[4766]: I1213 03:58:24.407530 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm" event={"ID":"92fe426f-df80-49bf-9259-7e04836a793f","Type":"ContainerStarted","Data":"731f89712659925931576e63fdce51bd03b2ee8cbf1149f1b47c76e9ae6ddfea"}
Dec 13 03:58:24 crc kubenswrapper[4766]: I1213 03:58:24.427611 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"]
Dec 13 03:58:24 crc kubenswrapper[4766]: W1213 03:58:24.449278 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb43cb171_915c_4cb3_bc9b_1525fe72213e.slice/crio-d3d8cca4752808d06bd55cddf7fe56a44c7821e8e53e39e0904921e390a57874 WatchSource:0}: Error finding container d3d8cca4752808d06bd55cddf7fe56a44c7821e8e53e39e0904921e390a57874: Status 404 returned error can't find the container with id d3d8cca4752808d06bd55cddf7fe56a44c7821e8e53e39e0904921e390a57874
Dec 13 03:58:25 crc kubenswrapper[4766]: I1213 03:58:25.415869 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7" event={"ID":"b43cb171-915c-4cb3-bc9b-1525fe72213e","Type":"ContainerStarted","Data":"d3d8cca4752808d06bd55cddf7fe56a44c7821e8e53e39e0904921e390a57874"}
Dec 13 03:58:29 crc kubenswrapper[4766]: I1213 03:58:29.804880 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:29 crc kubenswrapper[4766]: I1213 03:58:29.878739 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:30 crc kubenswrapper[4766]: I1213 03:58:30.274206 4766 scope.go:117] "RemoveContainer" containerID="40ada8692e77f3fdcfb7af0831f398beba03791a2e7286360296b007c809a329"
Dec 13 03:58:30 crc kubenswrapper[4766]: I1213 03:58:30.648987 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbdpw"]
Dec 13 03:58:31 crc kubenswrapper[4766]: I1213 03:58:31.456218 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cbdpw" podUID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerName="registry-server" containerID="cri-o://6021b4255598bda2c6732407adcb2a64b9dbadf499d92b555dfdee86cd82c785" gracePeriod=2
Dec 13 03:58:32 crc kubenswrapper[4766]: I1213 03:58:32.555921 4766 generic.go:334] "Generic (PLEG): container finished" podID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerID="6021b4255598bda2c6732407adcb2a64b9dbadf499d92b555dfdee86cd82c785" exitCode=0
Dec 13 03:58:32 crc kubenswrapper[4766]: I1213 03:58:32.556063 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbdpw" event={"ID":"7309ad5b-bf11-4e31-981f-d1608fd6364d","Type":"ContainerDied","Data":"6021b4255598bda2c6732407adcb2a64b9dbadf499d92b555dfdee86cd82c785"}
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.514058 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.590199 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbdpw" event={"ID":"7309ad5b-bf11-4e31-981f-d1608fd6364d","Type":"ContainerDied","Data":"0043c1701ae1267fa8567455396e75344827ed52abb5aab7a2f919d07491253e"}
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.590784 4766 scope.go:117] "RemoveContainer" containerID="6021b4255598bda2c6732407adcb2a64b9dbadf499d92b555dfdee86cd82c785"
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.590532 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbdpw"
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.593612 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6n4vc_b724d1e1-9ded-434e-b852-f5233f27ef32/kube-multus/2.log"
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.637293 4766 scope.go:117] "RemoveContainer" containerID="f52acc1810f91f8f8bf73426be04ca08627728655e4b910eea865d3786365d93"
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.666518 4766 scope.go:117] "RemoveContainer" containerID="facbd38ed78752983567bd37ed5653ee813eef969f895ac86d86734bc37339ef"
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.712856 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-utilities\") pod \"7309ad5b-bf11-4e31-981f-d1608fd6364d\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") "
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.713369 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-catalog-content\") pod \"7309ad5b-bf11-4e31-981f-d1608fd6364d\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") "
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.713420 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn47q\" (UniqueName: \"kubernetes.io/projected/7309ad5b-bf11-4e31-981f-d1608fd6364d-kube-api-access-nn47q\") pod \"7309ad5b-bf11-4e31-981f-d1608fd6364d\" (UID: \"7309ad5b-bf11-4e31-981f-d1608fd6364d\") "
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.713885 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-utilities" (OuterVolumeSpecName: "utilities") pod "7309ad5b-bf11-4e31-981f-d1608fd6364d" (UID: "7309ad5b-bf11-4e31-981f-d1608fd6364d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.715341 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.720381 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7309ad5b-bf11-4e31-981f-d1608fd6364d-kube-api-access-nn47q" (OuterVolumeSpecName: "kube-api-access-nn47q") pod "7309ad5b-bf11-4e31-981f-d1608fd6364d" (UID: "7309ad5b-bf11-4e31-981f-d1608fd6364d"). InnerVolumeSpecName "kube-api-access-nn47q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.815835 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn47q\" (UniqueName: \"kubernetes.io/projected/7309ad5b-bf11-4e31-981f-d1608fd6364d-kube-api-access-nn47q\") on node \"crc\" DevicePath \"\""
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.837082 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7309ad5b-bf11-4e31-981f-d1608fd6364d" (UID: "7309ad5b-bf11-4e31-981f-d1608fd6364d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.916807 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7309ad5b-bf11-4e31-981f-d1608fd6364d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.930340 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbdpw"]
Dec 13 03:58:34 crc kubenswrapper[4766]: I1213 03:58:34.931998 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cbdpw"]
Dec 13 03:58:35 crc kubenswrapper[4766]: I1213 03:58:35.605587 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm" event={"ID":"92fe426f-df80-49bf-9259-7e04836a793f","Type":"ContainerStarted","Data":"a1c99346806c7e933784c81842c874ac2ed080d3d30cb4fe7aebbbc21d36b5a0"}
Dec 13 03:58:35 crc kubenswrapper[4766]: I1213 03:58:35.605827 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:58:35 crc kubenswrapper[4766]: I1213 03:58:35.607597 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7" event={"ID":"b43cb171-915c-4cb3-bc9b-1525fe72213e","Type":"ContainerStarted","Data":"1eca4b545d8f12642ba0a9fad6b281da93ae236b7e09be164318c0a56c092f4f"}
Dec 13 03:58:35 crc kubenswrapper[4766]: I1213 03:58:35.607744 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:58:35 crc kubenswrapper[4766]: I1213 03:58:35.623743 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7309ad5b-bf11-4e31-981f-d1608fd6364d" path="/var/lib/kubelet/pods/7309ad5b-bf11-4e31-981f-d1608fd6364d/volumes"
Dec 13 03:58:35 crc kubenswrapper[4766]: I1213 03:58:35.643942 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm" podStartSLOduration=2.186280953 podStartE2EDuration="12.643917667s" podCreationTimestamp="2025-12-13 03:58:23 +0000 UTC" firstStartedPulling="2025-12-13 03:58:23.830606141 +0000 UTC m=+835.340539105" lastFinishedPulling="2025-12-13 03:58:34.288242855 +0000 UTC m=+845.798175819" observedRunningTime="2025-12-13 03:58:35.639935973 +0000 UTC m=+847.149868937" watchObservedRunningTime="2025-12-13 03:58:35.643917667 +0000 UTC m=+847.153850641"
Dec 13 03:58:35 crc kubenswrapper[4766]: I1213 03:58:35.668112 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7" podStartSLOduration=2.831410333 podStartE2EDuration="12.668093821s" podCreationTimestamp="2025-12-13 03:58:23 +0000 UTC" firstStartedPulling="2025-12-13 03:58:24.453522463 +0000 UTC m=+835.963455427" lastFinishedPulling="2025-12-13 03:58:34.290205951 +0000 UTC m=+845.800138915" observedRunningTime="2025-12-13 03:58:35.66529581 +0000 UTC m=+847.175228774" watchObservedRunningTime="2025-12-13 03:58:35.668093821 +0000 UTC m=+847.178026785"
Dec 13 03:58:39 crc kubenswrapper[4766]: I1213 03:58:39.732174 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 03:58:39 crc kubenswrapper[4766]: I1213 03:58:39.732785 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 03:58:39 crc kubenswrapper[4766]: I1213 03:58:39.732875 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94w9l"
Dec 13 03:58:39 crc kubenswrapper[4766]: I1213 03:58:39.733743 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2151a06f5707b72e56cc0032442b9fe647442317230e16d90226a37ee92ba85"} pod="openshift-machine-config-operator/machine-config-daemon-94w9l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 13 03:58:39 crc kubenswrapper[4766]: I1213 03:58:39.733838 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" containerID="cri-o://f2151a06f5707b72e56cc0032442b9fe647442317230e16d90226a37ee92ba85" gracePeriod=600
Dec 13 03:58:40 crc kubenswrapper[4766]: I1213 03:58:40.652908 4766 generic.go:334] "Generic (PLEG): container finished" podID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerID="f2151a06f5707b72e56cc0032442b9fe647442317230e16d90226a37ee92ba85" exitCode=0
Dec 13 03:58:40 crc kubenswrapper[4766]: I1213 03:58:40.653711 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerDied","Data":"f2151a06f5707b72e56cc0032442b9fe647442317230e16d90226a37ee92ba85"}
Dec 13 03:58:40 crc kubenswrapper[4766]: I1213 03:58:40.653875 4766 scope.go:117] "RemoveContainer" containerID="3581901a9e5d232fe2bee9f853467e0cc1f606b0f931756d9ef9ab621fab6fda"
Dec 13 03:58:41 crc kubenswrapper[4766]: I1213 03:58:41.661745 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"6f383501fdc030c21904f55bdc8f043a9f6b6848b9b670e0fc026beaf3079e7c"}
Dec 13 03:58:44 crc kubenswrapper[4766]: I1213 03:58:44.054964 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7fb5f44fc8-wqts7"
Dec 13 03:59:13 crc kubenswrapper[4766]: I1213 03:59:13.432463 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6468b8b4bf-7j6nm"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.296536 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-f2nj6"]
Dec 13 03:59:14 crc kubenswrapper[4766]: E1213 03:59:14.297352 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerName="extract-utilities"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.297401 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerName="extract-utilities"
Dec 13 03:59:14 crc kubenswrapper[4766]: E1213 03:59:14.297423 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerName="extract-content"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.297453 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerName="extract-content"
Dec 13 03:59:14 crc kubenswrapper[4766]: E1213 03:59:14.297473 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerName="registry-server"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.297482 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerName="registry-server"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.297658 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7309ad5b-bf11-4e31-981f-d1608fd6364d" containerName="registry-server"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.300318 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-f2nj6"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.301810 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt"]
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.303685 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lbcx2"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.303854 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.304164 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.311134 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.315767 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.319119 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt"]
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.371184 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggnx\" (UniqueName: \"kubernetes.io/projected/3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5-kube-api-access-vggnx\") pod \"frr-k8s-webhook-server-7784b6fcf-98sqt\" (UID: \"3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.371293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-98sqt\" (UID: \"3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.412643 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-4pshx"]
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.413814 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-4pshx"
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.416367 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-vft9m"]
Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.417420 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.417564 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.423832 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.423935 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.423988 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.423990 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vz4bm" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.432740 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-4pshx"] Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.473287 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/32d33161-c294-4ece-8783-25472b4ac4b7-frr-startup\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.473371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-98sqt\" (UID: \"3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.473408 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-metrics\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.473459 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-frr-sockets\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.473502 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-reloader\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.473543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62trf\" (UniqueName: \"kubernetes.io/projected/32d33161-c294-4ece-8783-25472b4ac4b7-kube-api-access-62trf\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.473574 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-frr-conf\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.473594 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32d33161-c294-4ece-8783-25472b4ac4b7-metrics-certs\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.473622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vggnx\" (UniqueName: \"kubernetes.io/projected/3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5-kube-api-access-vggnx\") pod \"frr-k8s-webhook-server-7784b6fcf-98sqt\" (UID: \"3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.481674 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-98sqt\" (UID: \"3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.503090 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vggnx\" (UniqueName: \"kubernetes.io/projected/3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5-kube-api-access-vggnx\") pod \"frr-k8s-webhook-server-7784b6fcf-98sqt\" (UID: \"3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575509 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32d33161-c294-4ece-8783-25472b4ac4b7-metrics-certs\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575575 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-metrics-certs\") pod \"controller-5bddd4b946-4pshx\" (UID: \"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81\") " pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575603 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-metrics-certs\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575627 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z8q8\" (UniqueName: \"kubernetes.io/projected/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-kube-api-access-4z8q8\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575686 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-metallb-excludel2\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575721 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/32d33161-c294-4ece-8783-25472b4ac4b7-frr-startup\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575752 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-metrics\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575780 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-frr-sockets\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575827 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-reloader\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575854 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-memberlist\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575887 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-cert\") pod \"controller-5bddd4b946-4pshx\" (UID: \"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81\") " pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575920 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62trf\" (UniqueName: \"kubernetes.io/projected/32d33161-c294-4ece-8783-25472b4ac4b7-kube-api-access-62trf\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575945 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78fgk\" (UniqueName: \"kubernetes.io/projected/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-kube-api-access-78fgk\") pod \"controller-5bddd4b946-4pshx\" (UID: \"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81\") " pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.575974 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-frr-conf\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc 
kubenswrapper[4766]: I1213 03:59:14.576537 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-frr-conf\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.577336 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-metrics\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.577696 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-frr-sockets\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.577992 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/32d33161-c294-4ece-8783-25472b4ac4b7-reloader\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.578098 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/32d33161-c294-4ece-8783-25472b4ac4b7-frr-startup\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.580493 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32d33161-c294-4ece-8783-25472b4ac4b7-metrics-certs\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.596592 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62trf\" (UniqueName: \"kubernetes.io/projected/32d33161-c294-4ece-8783-25472b4ac4b7-kube-api-access-62trf\") pod \"frr-k8s-f2nj6\" (UID: \"32d33161-c294-4ece-8783-25472b4ac4b7\") " pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.624786 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.653574 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.677876 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-memberlist\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: E1213 03:59:14.678028 4766 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.678047 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-cert\") pod \"controller-5bddd4b946-4pshx\" (UID: \"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81\") " pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.678119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78fgk\" (UniqueName: \"kubernetes.io/projected/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-kube-api-access-78fgk\") pod \"controller-5bddd4b946-4pshx\" (UID: \"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81\") " pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:14 crc kubenswrapper[4766]: E1213 03:59:14.678210 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-memberlist podName:fbd9aada-00ed-49a4-94b7-d96c1014fcfe nodeName:}" failed. No retries permitted until 2025-12-13 03:59:15.178130965 +0000 UTC m=+886.688063929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-memberlist") pod "speaker-vft9m" (UID: "fbd9aada-00ed-49a4-94b7-d96c1014fcfe") : secret "metallb-memberlist" not found Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.678267 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-metrics-certs\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.678303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z8q8\" (UniqueName: \"kubernetes.io/projected/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-kube-api-access-4z8q8\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.678354 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-metrics-certs\") pod \"controller-5bddd4b946-4pshx\" (UID: \"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81\") " pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:14 crc kubenswrapper[4766]: E1213 03:59:14.678750 4766 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 13 03:59:14 crc kubenswrapper[4766]: E1213 03:59:14.678852 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-metrics-certs 
podName:49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81 nodeName:}" failed. No retries permitted until 2025-12-13 03:59:15.178834995 +0000 UTC m=+886.688768029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-metrics-certs") pod "controller-5bddd4b946-4pshx" (UID: "49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81") : secret "controller-certs-secret" not found Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.678799 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-metallb-excludel2\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.679863 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-metallb-excludel2\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.681506 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.684064 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-metrics-certs\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.696768 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78fgk\" (UniqueName: \"kubernetes.io/projected/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-kube-api-access-78fgk\") pod \"controller-5bddd4b946-4pshx\" (UID: \"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81\") " pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.697030 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-cert\") pod \"controller-5bddd4b946-4pshx\" (UID: \"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81\") " pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:14 crc kubenswrapper[4766]: I1213 03:59:14.698605 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z8q8\" (UniqueName: \"kubernetes.io/projected/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-kube-api-access-4z8q8\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:15 crc kubenswrapper[4766]: I1213 03:59:15.084009 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt"] Dec 13 03:59:15 crc kubenswrapper[4766]: W1213 03:59:15.094773 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3873c53e_f61e_4d7e_bfe8_5f43ad0c49c5.slice/crio-dbed280361aa75e0c2f4d5cc99c29ba81d418ca58880f2f42daccda96fbf6c74 WatchSource:0}: Error finding container dbed280361aa75e0c2f4d5cc99c29ba81d418ca58880f2f42daccda96fbf6c74: Status 404 returned error can't find the container with id dbed280361aa75e0c2f4d5cc99c29ba81d418ca58880f2f42daccda96fbf6c74 
Dec 13 03:59:15 crc kubenswrapper[4766]: I1213 03:59:15.185974 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-memberlist\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:15 crc kubenswrapper[4766]: I1213 03:59:15.186401 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-metrics-certs\") pod \"controller-5bddd4b946-4pshx\" (UID: \"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81\") " pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:15 crc kubenswrapper[4766]: E1213 03:59:15.186173 4766 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 13 03:59:15 crc kubenswrapper[4766]: E1213 03:59:15.186521 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-memberlist podName:fbd9aada-00ed-49a4-94b7-d96c1014fcfe nodeName:}" failed. No retries permitted until 2025-12-13 03:59:16.186494237 +0000 UTC m=+887.696427201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-memberlist") pod "speaker-vft9m" (UID: "fbd9aada-00ed-49a4-94b7-d96c1014fcfe") : secret "metallb-memberlist" not found Dec 13 03:59:15 crc kubenswrapper[4766]: I1213 03:59:15.192655 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81-metrics-certs\") pod \"controller-5bddd4b946-4pshx\" (UID: \"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81\") " pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:15 crc kubenswrapper[4766]: I1213 03:59:15.337904 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:15 crc kubenswrapper[4766]: I1213 03:59:15.638507 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-4pshx"] Dec 13 03:59:15 crc kubenswrapper[4766]: W1213 03:59:15.648102 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49cd51a9_7aa8_4927_b8ef_0dc5f08bbd81.slice/crio-be461704d940adf56ae1c50bdf72fb30c81e54199e281cb1337c119119b51451 WatchSource:0}: Error finding container be461704d940adf56ae1c50bdf72fb30c81e54199e281cb1337c119119b51451: Status 404 returned error can't find the container with id be461704d940adf56ae1c50bdf72fb30c81e54199e281cb1337c119119b51451 Dec 13 03:59:16 crc kubenswrapper[4766]: I1213 03:59:16.064969 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt" event={"ID":"3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5","Type":"ContainerStarted","Data":"dbed280361aa75e0c2f4d5cc99c29ba81d418ca58880f2f42daccda96fbf6c74"} Dec 13 03:59:16 crc kubenswrapper[4766]: I1213 03:59:16.066234 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f2nj6" event={"ID":"32d33161-c294-4ece-8783-25472b4ac4b7","Type":"ContainerStarted","Data":"3df44d8d75bd402c4162d99948a1edc45db85a0d411e0f34b829b08ed41b293b"} Dec 13 03:59:16 crc kubenswrapper[4766]: I1213 03:59:16.067563 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-4pshx" event={"ID":"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81","Type":"ContainerStarted","Data":"e1e45bf1d07c1d0f31ead8527defe92c1179903be743d3b2e4dd0353e2e0736a"} Dec 13 03:59:16 crc kubenswrapper[4766]: I1213 03:59:16.067598 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-4pshx" event={"ID":"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81","Type":"ContainerStarted","Data":"be461704d940adf56ae1c50bdf72fb30c81e54199e281cb1337c119119b51451"} Dec 13 03:59:16 crc kubenswrapper[4766]: I1213 03:59:16.222341 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-memberlist\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:16 crc kubenswrapper[4766]: I1213 03:59:16.228086 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fbd9aada-00ed-49a4-94b7-d96c1014fcfe-memberlist\") pod \"speaker-vft9m\" (UID: \"fbd9aada-00ed-49a4-94b7-d96c1014fcfe\") " pod="metallb-system/speaker-vft9m" Dec 13 03:59:16 crc kubenswrapper[4766]: I1213 03:59:16.254285 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-vft9m" Dec 13 03:59:17 crc kubenswrapper[4766]: I1213 03:59:17.081562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vft9m" event={"ID":"fbd9aada-00ed-49a4-94b7-d96c1014fcfe","Type":"ContainerStarted","Data":"4c0d36f0c71536a3d90f4a95fb58d4254efd9b79acfad027dcb3ca6911e6f4bd"} Dec 13 03:59:17 crc kubenswrapper[4766]: I1213 03:59:17.081942 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vft9m" event={"ID":"fbd9aada-00ed-49a4-94b7-d96c1014fcfe","Type":"ContainerStarted","Data":"88a13db1c7e4b845810406ab3f3d1ff00ff0d8db44de7939c766fa8f43cbb9f9"} Dec 13 03:59:20 crc kubenswrapper[4766]: I1213 03:59:20.125018 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-4pshx" event={"ID":"49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81","Type":"ContainerStarted","Data":"f34b464b8e59e162e9221277333dae6678d18c1030ea6378d4358ce990120c58"} Dec 13 03:59:20 crc kubenswrapper[4766]: I1213 03:59:20.125714 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:20 crc kubenswrapper[4766]: I1213 03:59:20.127863 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vft9m" event={"ID":"fbd9aada-00ed-49a4-94b7-d96c1014fcfe","Type":"ContainerStarted","Data":"afdd4d4888814144a2ddd5b8542111be39f15c618e84ce2d1ffb3ed11a9da3b2"} Dec 13 03:59:20 crc kubenswrapper[4766]: I1213 03:59:20.128088 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-vft9m" Dec 13 03:59:20 crc kubenswrapper[4766]: I1213 03:59:20.149701 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-4pshx" podStartSLOduration=2.277285185 podStartE2EDuration="6.149679145s" podCreationTimestamp="2025-12-13 03:59:14 +0000 UTC" firstStartedPulling="2025-12-13 03:59:15.791675137 +0000 UTC m=+887.301608101" lastFinishedPulling="2025-12-13 03:59:19.664069097 +0000 UTC m=+891.174002061" observedRunningTime="2025-12-13 03:59:20.144623919 +0000 UTC m=+891.654556883" watchObservedRunningTime="2025-12-13 03:59:20.149679145 +0000 UTC m=+891.659612109" Dec 13 03:59:20 crc kubenswrapper[4766]: I1213 03:59:20.169103 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-vft9m" podStartSLOduration=3.026818547 podStartE2EDuration="6.169081961s" podCreationTimestamp="2025-12-13 03:59:14 +0000 UTC" firstStartedPulling="2025-12-13 03:59:16.533154129 +0000 UTC m=+888.043087093" lastFinishedPulling="2025-12-13 03:59:19.675417533 +0000 UTC m=+891.185350507" observedRunningTime="2025-12-13 03:59:20.164763208 +0000 UTC m=+891.674696172" watchObservedRunningTime="2025-12-13 03:59:20.169081961 +0000 UTC m=+891.679014925" Dec 13 03:59:24 crc kubenswrapper[4766]: I1213 03:59:24.162155 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt" event={"ID":"3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5","Type":"ContainerStarted","Data":"05278bc55a1e1dfdffd5a3e203cd25708d34da706a52aec480e4ace5a7e1d4c9"} Dec 13 03:59:24 crc kubenswrapper[4766]: I1213 03:59:24.162833 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt" Dec 13 03:59:24 crc kubenswrapper[4766]: I1213 03:59:24.165919 4766 generic.go:334] "Generic (PLEG): container finished" 
podID="32d33161-c294-4ece-8783-25472b4ac4b7" containerID="c06ec999d056bc5abdb4f5cb41633f222178a489e3ec63e544f1151f32524d3b" exitCode=0 Dec 13 03:59:24 crc kubenswrapper[4766]: I1213 03:59:24.165965 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f2nj6" event={"ID":"32d33161-c294-4ece-8783-25472b4ac4b7","Type":"ContainerDied","Data":"c06ec999d056bc5abdb4f5cb41633f222178a489e3ec63e544f1151f32524d3b"} Dec 13 03:59:24 crc kubenswrapper[4766]: I1213 03:59:24.183068 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt" podStartSLOduration=1.404143483 podStartE2EDuration="10.183052252s" podCreationTimestamp="2025-12-13 03:59:14 +0000 UTC" firstStartedPulling="2025-12-13 03:59:15.098341756 +0000 UTC m=+886.608274720" lastFinishedPulling="2025-12-13 03:59:23.877250525 +0000 UTC m=+895.387183489" observedRunningTime="2025-12-13 03:59:24.180981623 +0000 UTC m=+895.690914587" watchObservedRunningTime="2025-12-13 03:59:24.183052252 +0000 UTC m=+895.692985216" Dec 13 03:59:25 crc kubenswrapper[4766]: I1213 03:59:25.174332 4766 generic.go:334] "Generic (PLEG): container finished" podID="32d33161-c294-4ece-8783-25472b4ac4b7" containerID="67f1395ddbfb967d1e7b86c566d1ddc6cae7a766b2ffc6aebd992859e45f7340" exitCode=0 Dec 13 03:59:25 crc kubenswrapper[4766]: I1213 03:59:25.174389 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f2nj6" event={"ID":"32d33161-c294-4ece-8783-25472b4ac4b7","Type":"ContainerDied","Data":"67f1395ddbfb967d1e7b86c566d1ddc6cae7a766b2ffc6aebd992859e45f7340"} Dec 13 03:59:25 crc kubenswrapper[4766]: I1213 03:59:25.343560 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-4pshx" Dec 13 03:59:26 crc kubenswrapper[4766]: I1213 03:59:26.182474 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f2nj6" event={"ID":"32d33161-c294-4ece-8783-25472b4ac4b7","Type":"ContainerDied","Data":"075cee61f177a06e4c1ea685035aa9acb4fce02e1f08aa86c7e19a94bd217721"} Dec 13 03:59:26 crc kubenswrapper[4766]: I1213 03:59:26.182338 4766 generic.go:334] "Generic (PLEG): container finished" podID="32d33161-c294-4ece-8783-25472b4ac4b7" containerID="075cee61f177a06e4c1ea685035aa9acb4fce02e1f08aa86c7e19a94bd217721" exitCode=0 Dec 13 03:59:26 crc kubenswrapper[4766]: I1213 03:59:26.259041 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-vft9m" Dec 13 03:59:27 crc kubenswrapper[4766]: I1213 03:59:27.194116 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f2nj6" event={"ID":"32d33161-c294-4ece-8783-25472b4ac4b7","Type":"ContainerStarted","Data":"105ac26ceeccefc4aae9349ce603e7f65cbbb8836a78e3ad90b0e09948b93c1e"} Dec 13 03:59:27 crc kubenswrapper[4766]: I1213 03:59:27.194426 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f2nj6" event={"ID":"32d33161-c294-4ece-8783-25472b4ac4b7","Type":"ContainerStarted","Data":"d2019d350a5ee3d5527e96b0747d0d002fda5eba493a5328e3d16c9f35018bf5"} Dec 13 03:59:27 crc kubenswrapper[4766]: I1213 03:59:27.195192 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f2nj6" event={"ID":"32d33161-c294-4ece-8783-25472b4ac4b7","Type":"ContainerStarted","Data":"a7dead4505e7edc7aa6749a75f201aedce9162bbf032a72d58d11befbeb7c529"} Dec 13 03:59:27 crc kubenswrapper[4766]: I1213 03:59:27.195203 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f2nj6" event={"ID":"32d33161-c294-4ece-8783-25472b4ac4b7","Type":"ContainerStarted","Data":"78b1de8d434e97d8adbbc1dbc2b738428ec64dcad82e0a580df61fc27eb5d4ba"} Dec 13 03:59:27 crc kubenswrapper[4766]: I1213 03:59:27.195213 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f2nj6" event={"ID":"32d33161-c294-4ece-8783-25472b4ac4b7","Type":"ContainerStarted","Data":"e46239893a4cb31edd85433fb64dd4e1a6dcf5fc696b958134d264fae6c5728d"} Dec 13 03:59:28 crc kubenswrapper[4766]: I1213 03:59:28.208664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f2nj6" event={"ID":"32d33161-c294-4ece-8783-25472b4ac4b7","Type":"ContainerStarted","Data":"883860253fc43f180342281f69c96693d22555122a29fb9c83306e075ed08dbf"} Dec 13 03:59:28 crc kubenswrapper[4766]: I1213 03:59:28.209340 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:28 crc kubenswrapper[4766]: I1213 03:59:28.232430 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-f2nj6" podStartSLOduration=5.840040608 podStartE2EDuration="14.232408721s" podCreationTimestamp="2025-12-13 03:59:14 +0000 UTC" firstStartedPulling="2025-12-13 03:59:15.501834238 +0000 UTC m=+887.011767202" lastFinishedPulling="2025-12-13 03:59:23.894202351 +0000 UTC m=+895.404135315" observedRunningTime="2025-12-13 03:59:28.228347894 +0000 UTC m=+899.738280858" watchObservedRunningTime="2025-12-13 03:59:28.232408721 +0000 UTC m=+899.742341685" Dec 13 03:59:29 crc kubenswrapper[4766]: I1213 03:59:29.625377 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:29 crc kubenswrapper[4766]: I1213 03:59:29.738878 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:32 crc kubenswrapper[4766]: I1213 03:59:32.791301 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-b86mj"] Dec 13 03:59:32 crc kubenswrapper[4766]: I1213 03:59:32.792754 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-b86mj" Dec 13 03:59:32 crc kubenswrapper[4766]: I1213 03:59:32.794362 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-7rqbz" Dec 13 03:59:32 crc kubenswrapper[4766]: I1213 03:59:32.794752 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 13 03:59:32 crc kubenswrapper[4766]: I1213 03:59:32.794971 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 13 03:59:32 crc kubenswrapper[4766]: I1213 03:59:32.807580 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-b86mj"] Dec 13 03:59:32 crc kubenswrapper[4766]: I1213 03:59:32.836006 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8tw8\" (UniqueName: \"kubernetes.io/projected/37f1416f-3feb-40ac-a3d2-84c42a9f4d4b-kube-api-access-w8tw8\") pod \"mariadb-operator-index-b86mj\" (UID: \"37f1416f-3feb-40ac-a3d2-84c42a9f4d4b\") " pod="openstack-operators/mariadb-operator-index-b86mj" Dec 13 03:59:32 crc kubenswrapper[4766]: I1213 03:59:32.937510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8tw8\" (UniqueName: \"kubernetes.io/projected/37f1416f-3feb-40ac-a3d2-84c42a9f4d4b-kube-api-access-w8tw8\") pod \"mariadb-operator-index-b86mj\" (UID: \"37f1416f-3feb-40ac-a3d2-84c42a9f4d4b\") " pod="openstack-operators/mariadb-operator-index-b86mj" Dec 13 03:59:32 crc kubenswrapper[4766]: I1213 03:59:32.959746 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8tw8\" (UniqueName: \"kubernetes.io/projected/37f1416f-3feb-40ac-a3d2-84c42a9f4d4b-kube-api-access-w8tw8\") pod \"mariadb-operator-index-b86mj\" (UID: \"37f1416f-3feb-40ac-a3d2-84c42a9f4d4b\") " pod="openstack-operators/mariadb-operator-index-b86mj" Dec 13 03:59:33 crc kubenswrapper[4766]: I1213 03:59:33.110640 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-b86mj" Dec 13 03:59:33 crc kubenswrapper[4766]: I1213 03:59:33.770080 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-b86mj"] Dec 13 03:59:34 crc kubenswrapper[4766]: I1213 03:59:34.244471 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b86mj" event={"ID":"37f1416f-3feb-40ac-a3d2-84c42a9f4d4b","Type":"ContainerStarted","Data":"e7c1f46cbfce1248cf308c2b1c3b9ee5fba7b2e7a131f094fa905a800c0f338a"} Dec 13 03:59:34 crc kubenswrapper[4766]: I1213 03:59:34.661027 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-98sqt" Dec 13 03:59:35 crc kubenswrapper[4766]: I1213 03:59:35.967644 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-b86mj"] Dec 13 03:59:36 crc kubenswrapper[4766]: I1213 03:59:36.583064 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-hvxs7"] Dec 13 03:59:36 crc kubenswrapper[4766]: I1213 03:59:36.584195 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-hvxs7" Dec 13 03:59:36 crc kubenswrapper[4766]: I1213 03:59:36.587978 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-hvxs7"] Dec 13 03:59:36 crc kubenswrapper[4766]: I1213 03:59:36.764836 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzckm\" (UniqueName: \"kubernetes.io/projected/ab67c713-8fff-41ec-bd15-da3ea9d58068-kube-api-access-hzckm\") pod \"mariadb-operator-index-hvxs7\" (UID: \"ab67c713-8fff-41ec-bd15-da3ea9d58068\") " pod="openstack-operators/mariadb-operator-index-hvxs7" Dec 13 03:59:36 crc kubenswrapper[4766]: I1213 03:59:36.866563 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzckm\" (UniqueName: \"kubernetes.io/projected/ab67c713-8fff-41ec-bd15-da3ea9d58068-kube-api-access-hzckm\") pod \"mariadb-operator-index-hvxs7\" (UID: \"ab67c713-8fff-41ec-bd15-da3ea9d58068\") " pod="openstack-operators/mariadb-operator-index-hvxs7" Dec 13 03:59:36 crc kubenswrapper[4766]: I1213 03:59:36.904763 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzckm\" (UniqueName: \"kubernetes.io/projected/ab67c713-8fff-41ec-bd15-da3ea9d58068-kube-api-access-hzckm\") pod \"mariadb-operator-index-hvxs7\" (UID: \"ab67c713-8fff-41ec-bd15-da3ea9d58068\") " pod="openstack-operators/mariadb-operator-index-hvxs7" Dec 13 03:59:36 crc kubenswrapper[4766]: I1213 03:59:36.923083 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-hvxs7" Dec 13 03:59:44 crc kubenswrapper[4766]: I1213 03:59:44.652567 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-f2nj6" Dec 13 03:59:51 crc kubenswrapper[4766]: I1213 03:59:51.669211 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-hvxs7"] Dec 13 03:59:51 crc kubenswrapper[4766]: I1213 03:59:51.694674 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hvxs7" event={"ID":"ab67c713-8fff-41ec-bd15-da3ea9d58068","Type":"ContainerStarted","Data":"271f807e7e9221aaf93435be0f43ceca3f8824784ef71ceb3d54eb990b83dd9c"} Dec 13 03:59:51 crc kubenswrapper[4766]: I1213 03:59:51.698007 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b86mj" event={"ID":"37f1416f-3feb-40ac-a3d2-84c42a9f4d4b","Type":"ContainerStarted","Data":"c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c"} Dec 13 03:59:51 crc kubenswrapper[4766]: I1213 03:59:51.698156 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-b86mj" podUID="37f1416f-3feb-40ac-a3d2-84c42a9f4d4b" containerName="registry-server" containerID="cri-o://c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c" gracePeriod=2 Dec 13 03:59:51 crc kubenswrapper[4766]: I1213 03:59:51.717874 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-b86mj" podStartSLOduration=2.24114293 podStartE2EDuration="19.717842702s" podCreationTimestamp="2025-12-13 03:59:32 +0000 UTC" firstStartedPulling="2025-12-13 03:59:33.783216626 +0000 UTC m=+905.293149590" lastFinishedPulling="2025-12-13 03:59:51.259916398 +0000 UTC m=+922.769849362" 
observedRunningTime="2025-12-13 03:59:51.712949292 +0000 UTC m=+923.222882256" watchObservedRunningTime="2025-12-13 03:59:51.717842702 +0000 UTC m=+923.227775666" Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.017213 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-b86mj" Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.039825 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8tw8\" (UniqueName: \"kubernetes.io/projected/37f1416f-3feb-40ac-a3d2-84c42a9f4d4b-kube-api-access-w8tw8\") pod \"37f1416f-3feb-40ac-a3d2-84c42a9f4d4b\" (UID: \"37f1416f-3feb-40ac-a3d2-84c42a9f4d4b\") " Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.046067 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f1416f-3feb-40ac-a3d2-84c42a9f4d4b-kube-api-access-w8tw8" (OuterVolumeSpecName: "kube-api-access-w8tw8") pod "37f1416f-3feb-40ac-a3d2-84c42a9f4d4b" (UID: "37f1416f-3feb-40ac-a3d2-84c42a9f4d4b"). InnerVolumeSpecName "kube-api-access-w8tw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.141302 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8tw8\" (UniqueName: \"kubernetes.io/projected/37f1416f-3feb-40ac-a3d2-84c42a9f4d4b-kube-api-access-w8tw8\") on node \"crc\" DevicePath \"\"" Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.713064 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hvxs7" event={"ID":"ab67c713-8fff-41ec-bd15-da3ea9d58068","Type":"ContainerStarted","Data":"0b0e6820e7db3aa4e148e0fd9edb798244033a2a5dfbfb5ce4f0b168704ab6d7"} Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.714789 4766 generic.go:334] "Generic (PLEG): container finished" podID="37f1416f-3feb-40ac-a3d2-84c42a9f4d4b" containerID="c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c" exitCode=0 Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.714816 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b86mj" event={"ID":"37f1416f-3feb-40ac-a3d2-84c42a9f4d4b","Type":"ContainerDied","Data":"c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c"} Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.714835 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b86mj" event={"ID":"37f1416f-3feb-40ac-a3d2-84c42a9f4d4b","Type":"ContainerDied","Data":"e7c1f46cbfce1248cf308c2b1c3b9ee5fba7b2e7a131f094fa905a800c0f338a"} Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.714814 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-b86mj" Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.714897 4766 scope.go:117] "RemoveContainer" containerID="c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c" Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.734332 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-hvxs7" podStartSLOduration=16.206164918 podStartE2EDuration="16.734311688s" podCreationTimestamp="2025-12-13 03:59:36 +0000 UTC" firstStartedPulling="2025-12-13 03:59:51.688882101 +0000 UTC m=+923.198815065" lastFinishedPulling="2025-12-13 03:59:52.217028881 +0000 UTC m=+923.726961835" observedRunningTime="2025-12-13 03:59:52.732875286 +0000 UTC m=+924.242808250" watchObservedRunningTime="2025-12-13 03:59:52.734311688 +0000 UTC m=+924.244244682" Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.743197 4766 scope.go:117] "RemoveContainer" containerID="c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c" Dec 13 03:59:52 crc kubenswrapper[4766]: E1213 03:59:52.743940 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c\": container with ID starting with c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c not found: ID does not exist" containerID="c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c" Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.743988 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c"} err="failed to get container status \"c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c\": rpc error: code = NotFound desc = could not find container \"c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c\": container with ID starting with c0aa2656a8e1974a7103c9552a90c9147680e462386fd6ac186f3680ac4d2d1c not found: ID does not exist" Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.760555 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-b86mj"] Dec 13 03:59:52 crc kubenswrapper[4766]: I1213 03:59:52.774665 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-b86mj"] Dec 13 03:59:52 crc kubenswrapper[4766]: E1213 03:59:52.800961 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37f1416f_3feb_40ac_a3d2_84c42a9f4d4b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37f1416f_3feb_40ac_a3d2_84c42a9f4d4b.slice/crio-e7c1f46cbfce1248cf308c2b1c3b9ee5fba7b2e7a131f094fa905a800c0f338a\": RecentStats: unable to find data in memory cache]" Dec 13 03:59:53 crc kubenswrapper[4766]: I1213 03:59:53.624047 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f1416f-3feb-40ac-a3d2-84c42a9f4d4b" path="/var/lib/kubelet/pods/37f1416f-3feb-40ac-a3d2-84c42a9f4d4b/volumes" Dec 13 03:59:56 crc kubenswrapper[4766]: I1213 03:59:56.963954 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-hvxs7" Dec 13 03:59:56 crc kubenswrapper[4766]: I1213 03:59:56.964756 4766 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-hvxs7"
Dec 13 03:59:56 crc kubenswrapper[4766]: I1213 03:59:56.997258 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-hvxs7"
Dec 13 03:59:57 crc kubenswrapper[4766]: I1213 03:59:57.772930 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-hvxs7"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.156367 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"]
Dec 13 04:00:00 crc kubenswrapper[4766]: E1213 04:00:00.157092 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f1416f-3feb-40ac-a3d2-84c42a9f4d4b" containerName="registry-server"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.157118 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f1416f-3feb-40ac-a3d2-84c42a9f4d4b" containerName="registry-server"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.157254 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f1416f-3feb-40ac-a3d2-84c42a9f4d4b" containerName="registry-server"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.157892 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.160565 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.160911 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.166265 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"]
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.351069 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4cgw\" (UniqueName: \"kubernetes.io/projected/6cae440f-a456-44e4-ab81-71c6d9aaf25e-kube-api-access-z4cgw\") pod \"collect-profiles-29426640-dhxpq\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.351155 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cae440f-a456-44e4-ab81-71c6d9aaf25e-config-volume\") pod \"collect-profiles-29426640-dhxpq\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.351203 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cae440f-a456-44e4-ab81-71c6d9aaf25e-secret-volume\") pod \"collect-profiles-29426640-dhxpq\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.452871 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cae440f-a456-44e4-ab81-71c6d9aaf25e-secret-volume\") pod \"collect-profiles-29426640-dhxpq\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.453000 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4cgw\" (UniqueName: \"kubernetes.io/projected/6cae440f-a456-44e4-ab81-71c6d9aaf25e-kube-api-access-z4cgw\") pod \"collect-profiles-29426640-dhxpq\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.453057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cae440f-a456-44e4-ab81-71c6d9aaf25e-config-volume\") pod \"collect-profiles-29426640-dhxpq\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.454579 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cae440f-a456-44e4-ab81-71c6d9aaf25e-config-volume\") pod \"collect-profiles-29426640-dhxpq\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.461001 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cae440f-a456-44e4-ab81-71c6d9aaf25e-secret-volume\") pod \"collect-profiles-29426640-dhxpq\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.471864 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4cgw\" (UniqueName: \"kubernetes.io/projected/6cae440f-a456-44e4-ab81-71c6d9aaf25e-kube-api-access-z4cgw\") pod \"collect-profiles-29426640-dhxpq\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.482486 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.676758 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"]
Dec 13 04:00:00 crc kubenswrapper[4766]: I1213 04:00:00.759558 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq" event={"ID":"6cae440f-a456-44e4-ab81-71c6d9aaf25e","Type":"ContainerStarted","Data":"8210a76d5abafc49d8a503e2478117eed8e1e0ac0a7b0f77a62876dc467e358f"}
Dec 13 04:00:01 crc kubenswrapper[4766]: I1213 04:00:01.766364 4766 generic.go:334] "Generic (PLEG): container finished" podID="6cae440f-a456-44e4-ab81-71c6d9aaf25e" containerID="59b8717bd0c5e85a524aae38a5fb4e9c51c9dc63ea0a5a65c70c22538b72aacd" exitCode=0
Dec 13 04:00:01 crc kubenswrapper[4766]: I1213 04:00:01.766444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq" event={"ID":"6cae440f-a456-44e4-ab81-71c6d9aaf25e","Type":"ContainerDied","Data":"59b8717bd0c5e85a524aae38a5fb4e9c51c9dc63ea0a5a65c70c22538b72aacd"}
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.837147 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"]
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.838703 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.841963 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vg794"
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.853204 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"]
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.891693 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-866f4\" (UniqueName: \"kubernetes.io/projected/bae6116c-1f11-4b23-826c-90264615b3ca-kube-api-access-866f4\") pod \"798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") " pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.892031 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-util\") pod \"798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") " pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.892151 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-bundle\") pod \"798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") " pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.993309 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-866f4\" (UniqueName: \"kubernetes.io/projected/bae6116c-1f11-4b23-826c-90264615b3ca-kube-api-access-866f4\") pod \"798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") " pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.993393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-util\") pod \"798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") " pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.993455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-bundle\") pod \"798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") " pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.993980 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-bundle\") pod \"798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") " pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:02 crc kubenswrapper[4766]: I1213 04:00:02.997011 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-util\") pod \"798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") " pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.017830 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-866f4\" (UniqueName: \"kubernetes.io/projected/bae6116c-1f11-4b23-826c-90264615b3ca-kube-api-access-866f4\") pod \"798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") " pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.053497 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.162478 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.195216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cae440f-a456-44e4-ab81-71c6d9aaf25e-secret-volume\") pod \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") "
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.195388 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cae440f-a456-44e4-ab81-71c6d9aaf25e-config-volume\") pod \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") "
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.195487 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4cgw\" (UniqueName: \"kubernetes.io/projected/6cae440f-a456-44e4-ab81-71c6d9aaf25e-kube-api-access-z4cgw\") pod \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\" (UID: \"6cae440f-a456-44e4-ab81-71c6d9aaf25e\") "
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.196326 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cae440f-a456-44e4-ab81-71c6d9aaf25e-config-volume" (OuterVolumeSpecName: "config-volume") pod "6cae440f-a456-44e4-ab81-71c6d9aaf25e" (UID: "6cae440f-a456-44e4-ab81-71c6d9aaf25e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.198607 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cae440f-a456-44e4-ab81-71c6d9aaf25e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6cae440f-a456-44e4-ab81-71c6d9aaf25e" (UID: "6cae440f-a456-44e4-ab81-71c6d9aaf25e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.199958 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cae440f-a456-44e4-ab81-71c6d9aaf25e-kube-api-access-z4cgw" (OuterVolumeSpecName: "kube-api-access-z4cgw") pod "6cae440f-a456-44e4-ab81-71c6d9aaf25e" (UID: "6cae440f-a456-44e4-ab81-71c6d9aaf25e"). InnerVolumeSpecName "kube-api-access-z4cgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.297500 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cae440f-a456-44e4-ab81-71c6d9aaf25e-config-volume\") on node \"crc\" DevicePath \"\""
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.297570 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4cgw\" (UniqueName: \"kubernetes.io/projected/6cae440f-a456-44e4-ab81-71c6d9aaf25e-kube-api-access-z4cgw\") on node \"crc\" DevicePath \"\""
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.297586 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cae440f-a456-44e4-ab81-71c6d9aaf25e-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.487792 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"]
Dec 13 04:00:03 crc kubenswrapper[4766]: W1213 04:00:03.492360 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae6116c_1f11_4b23_826c_90264615b3ca.slice/crio-90ed13b2b107a88cac3dc82ff30f9e3f8f1c92d819012290321f26fb21fe3887 WatchSource:0}: Error finding container 90ed13b2b107a88cac3dc82ff30f9e3f8f1c92d819012290321f26fb21fe3887: Status 404 returned error can't find the container with id 90ed13b2b107a88cac3dc82ff30f9e3f8f1c92d819012290321f26fb21fe3887
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.783575 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq" event={"ID":"6cae440f-a456-44e4-ab81-71c6d9aaf25e","Type":"ContainerDied","Data":"8210a76d5abafc49d8a503e2478117eed8e1e0ac0a7b0f77a62876dc467e358f"}
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.783620 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426640-dhxpq"
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.783624 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8210a76d5abafc49d8a503e2478117eed8e1e0ac0a7b0f77a62876dc467e358f"
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.785310 4766 generic.go:334] "Generic (PLEG): container finished" podID="bae6116c-1f11-4b23-826c-90264615b3ca" containerID="bb30d4f69f7a8bab705e1dc29f6118fe72b9ad45ec6c4bf8eef1e366033426f7" exitCode=0
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.785366 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82" event={"ID":"bae6116c-1f11-4b23-826c-90264615b3ca","Type":"ContainerDied","Data":"bb30d4f69f7a8bab705e1dc29f6118fe72b9ad45ec6c4bf8eef1e366033426f7"}
Dec 13 04:00:03 crc kubenswrapper[4766]: I1213 04:00:03.785398 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82" event={"ID":"bae6116c-1f11-4b23-826c-90264615b3ca","Type":"ContainerStarted","Data":"90ed13b2b107a88cac3dc82ff30f9e3f8f1c92d819012290321f26fb21fe3887"}
Dec 13 04:00:05 crc kubenswrapper[4766]: I1213 04:00:05.803081 4766 generic.go:334] "Generic (PLEG): container finished" podID="bae6116c-1f11-4b23-826c-90264615b3ca" containerID="20d4ae30f3fa3756e8272d91037cf91bb6ca7020e663178314ea7f30b8e7de75" exitCode=0
Dec 13 04:00:05 crc kubenswrapper[4766]: I1213 04:00:05.803198 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82" event={"ID":"bae6116c-1f11-4b23-826c-90264615b3ca","Type":"ContainerDied","Data":"20d4ae30f3fa3756e8272d91037cf91bb6ca7020e663178314ea7f30b8e7de75"}
Dec 13 04:00:06 crc kubenswrapper[4766]: I1213 04:00:06.811218 4766 generic.go:334] "Generic (PLEG): container finished" podID="bae6116c-1f11-4b23-826c-90264615b3ca" containerID="db989c8e719b514bf6a7fa9b59f161d10299d7dda9a0a6ffb2011b390a13c4fd" exitCode=0
Dec 13 04:00:06 crc kubenswrapper[4766]: I1213 04:00:06.811270 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82" event={"ID":"bae6116c-1f11-4b23-826c-90264615b3ca","Type":"ContainerDied","Data":"db989c8e719b514bf6a7fa9b59f161d10299d7dda9a0a6ffb2011b390a13c4fd"}
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.064880 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.169127 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-util\") pod \"bae6116c-1f11-4b23-826c-90264615b3ca\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") "
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.169227 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-866f4\" (UniqueName: \"kubernetes.io/projected/bae6116c-1f11-4b23-826c-90264615b3ca-kube-api-access-866f4\") pod \"bae6116c-1f11-4b23-826c-90264615b3ca\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") "
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.169268 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-bundle\") pod \"bae6116c-1f11-4b23-826c-90264615b3ca\" (UID: \"bae6116c-1f11-4b23-826c-90264615b3ca\") "
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.170319 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-bundle" (OuterVolumeSpecName: "bundle") pod "bae6116c-1f11-4b23-826c-90264615b3ca" (UID: "bae6116c-1f11-4b23-826c-90264615b3ca"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.175588 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae6116c-1f11-4b23-826c-90264615b3ca-kube-api-access-866f4" (OuterVolumeSpecName: "kube-api-access-866f4") pod "bae6116c-1f11-4b23-826c-90264615b3ca" (UID: "bae6116c-1f11-4b23-826c-90264615b3ca"). InnerVolumeSpecName "kube-api-access-866f4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.185808 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-util" (OuterVolumeSpecName: "util") pod "bae6116c-1f11-4b23-826c-90264615b3ca" (UID: "bae6116c-1f11-4b23-826c-90264615b3ca"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.270889 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-util\") on node \"crc\" DevicePath \"\""
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.270940 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-866f4\" (UniqueName: \"kubernetes.io/projected/bae6116c-1f11-4b23-826c-90264615b3ca-kube-api-access-866f4\") on node \"crc\" DevicePath \"\""
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.270953 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bae6116c-1f11-4b23-826c-90264615b3ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.823494 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82" event={"ID":"bae6116c-1f11-4b23-826c-90264615b3ca","Type":"ContainerDied","Data":"90ed13b2b107a88cac3dc82ff30f9e3f8f1c92d819012290321f26fb21fe3887"}
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.823538 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90ed13b2b107a88cac3dc82ff30f9e3f8f1c92d819012290321f26fb21fe3887"
Dec 13 04:00:08 crc kubenswrapper[4766]: I1213 04:00:08.823625 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.336826 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"]
Dec 13 04:00:16 crc kubenswrapper[4766]: E1213 04:00:16.337783 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae6116c-1f11-4b23-826c-90264615b3ca" containerName="extract"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.337810 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae6116c-1f11-4b23-826c-90264615b3ca" containerName="extract"
Dec 13 04:00:16 crc kubenswrapper[4766]: E1213 04:00:16.337833 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae6116c-1f11-4b23-826c-90264615b3ca" containerName="util"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.337841 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae6116c-1f11-4b23-826c-90264615b3ca" containerName="util"
Dec 13 04:00:16 crc kubenswrapper[4766]: E1213 04:00:16.337856 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae6116c-1f11-4b23-826c-90264615b3ca" containerName="pull"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.337864 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae6116c-1f11-4b23-826c-90264615b3ca" containerName="pull"
Dec 13 04:00:16 crc kubenswrapper[4766]: E1213 04:00:16.337875 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cae440f-a456-44e4-ab81-71c6d9aaf25e" containerName="collect-profiles"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.337885 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cae440f-a456-44e4-ab81-71c6d9aaf25e" containerName="collect-profiles"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.338133 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cae440f-a456-44e4-ab81-71c6d9aaf25e" containerName="collect-profiles"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.338149 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae6116c-1f11-4b23-826c-90264615b3ca" containerName="extract"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.338961 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.343060 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.343627 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ck8zr"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.343766 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.363565 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"]
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.592158 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da4aa2bc-b65f-4188-8b6c-f89f57f8abec-apiservice-cert\") pod \"mariadb-operator-controller-manager-c86c5fb9b-hp997\" (UID: \"da4aa2bc-b65f-4188-8b6c-f89f57f8abec\") " pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.592355 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8kfq\" (UniqueName: \"kubernetes.io/projected/da4aa2bc-b65f-4188-8b6c-f89f57f8abec-kube-api-access-l8kfq\") pod \"mariadb-operator-controller-manager-c86c5fb9b-hp997\" (UID: \"da4aa2bc-b65f-4188-8b6c-f89f57f8abec\") " pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.592386 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da4aa2bc-b65f-4188-8b6c-f89f57f8abec-webhook-cert\") pod \"mariadb-operator-controller-manager-c86c5fb9b-hp997\" (UID: \"da4aa2bc-b65f-4188-8b6c-f89f57f8abec\") " pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.694423 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8kfq\" (UniqueName: \"kubernetes.io/projected/da4aa2bc-b65f-4188-8b6c-f89f57f8abec-kube-api-access-l8kfq\") pod \"mariadb-operator-controller-manager-c86c5fb9b-hp997\" (UID: \"da4aa2bc-b65f-4188-8b6c-f89f57f8abec\") " pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.694489 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da4aa2bc-b65f-4188-8b6c-f89f57f8abec-webhook-cert\") pod \"mariadb-operator-controller-manager-c86c5fb9b-hp997\" (UID: \"da4aa2bc-b65f-4188-8b6c-f89f57f8abec\") " pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.694635 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da4aa2bc-b65f-4188-8b6c-f89f57f8abec-apiservice-cert\") pod \"mariadb-operator-controller-manager-c86c5fb9b-hp997\" (UID: \"da4aa2bc-b65f-4188-8b6c-f89f57f8abec\") " pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.703157 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da4aa2bc-b65f-4188-8b6c-f89f57f8abec-webhook-cert\") pod \"mariadb-operator-controller-manager-c86c5fb9b-hp997\" (UID: \"da4aa2bc-b65f-4188-8b6c-f89f57f8abec\") " pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.703174 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da4aa2bc-b65f-4188-8b6c-f89f57f8abec-apiservice-cert\") pod \"mariadb-operator-controller-manager-c86c5fb9b-hp997\" (UID: \"da4aa2bc-b65f-4188-8b6c-f89f57f8abec\") " pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.712650 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8kfq\" (UniqueName: \"kubernetes.io/projected/da4aa2bc-b65f-4188-8b6c-f89f57f8abec-kube-api-access-l8kfq\") pod \"mariadb-operator-controller-manager-c86c5fb9b-hp997\" (UID: \"da4aa2bc-b65f-4188-8b6c-f89f57f8abec\") " pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:16 crc kubenswrapper[4766]: I1213 04:00:16.961107 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:17 crc kubenswrapper[4766]: I1213 04:00:17.208092 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"]
Dec 13 04:00:17 crc kubenswrapper[4766]: I1213 04:00:17.879950 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997" event={"ID":"da4aa2bc-b65f-4188-8b6c-f89f57f8abec","Type":"ContainerStarted","Data":"d0b956f636e61ef2584fa7523e2b1925b2efb7f59ffd6a7f6b431e2192db5620"}
Dec 13 04:00:25 crc kubenswrapper[4766]: I1213 04:00:25.962796 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997" event={"ID":"da4aa2bc-b65f-4188-8b6c-f89f57f8abec","Type":"ContainerStarted","Data":"788730155a9a756d9273c064cba06d6443c374095034fffe09d835d5da334aff"}
Dec 13 04:00:27 crc kubenswrapper[4766]: I1213 04:00:27.977567 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997" event={"ID":"da4aa2bc-b65f-4188-8b6c-f89f57f8abec","Type":"ContainerStarted","Data":"384e900e752ea234cd9dc80b1e5ebfb35c07d851c1df5b0f58fc0f9963837afa"}
Dec 13 04:00:27 crc kubenswrapper[4766]: I1213 04:00:27.978097 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:28 crc kubenswrapper[4766]: I1213 04:00:28.010557 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997" podStartSLOduration=1.3984898110000001 podStartE2EDuration="12.010533118s" podCreationTimestamp="2025-12-13 04:00:16 +0000 UTC" firstStartedPulling="2025-12-13 04:00:17.221174461 +0000 UTC m=+948.731107425" lastFinishedPulling="2025-12-13 04:00:27.833217768 +0000 UTC m=+959.343150732" observedRunningTime="2025-12-13 04:00:28.002419995 +0000 UTC m=+959.512352959" watchObservedRunningTime="2025-12-13 04:00:28.010533118 +0000 UTC m=+959.520466082"
Dec 13 04:00:36 crc kubenswrapper[4766]: I1213 04:00:36.967333 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c86c5fb9b-hp997"
Dec 13 04:00:37 crc kubenswrapper[4766]: I1213 04:00:37.691202 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-dqrzv"]
Dec 13 04:00:37 crc kubenswrapper[4766]: I1213 04:00:37.692400 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-dqrzv"
Dec 13 04:00:37 crc kubenswrapper[4766]: I1213 04:00:37.698849 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-2zk67"
Dec 13 04:00:37 crc kubenswrapper[4766]: I1213 04:00:37.746109 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-dqrzv"]
Dec 13 04:00:37 crc kubenswrapper[4766]: I1213 04:00:37.847088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfkf4\" (UniqueName: \"kubernetes.io/projected/f1c277fc-a49f-4c52-9f82-0725a8e18d43-kube-api-access-lfkf4\") pod \"infra-operator-index-dqrzv\" (UID: \"f1c277fc-a49f-4c52-9f82-0725a8e18d43\") " pod="openstack-operators/infra-operator-index-dqrzv"
Dec 13 04:00:37 crc kubenswrapper[4766]: I1213 04:00:37.948862 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfkf4\" (UniqueName: \"kubernetes.io/projected/f1c277fc-a49f-4c52-9f82-0725a8e18d43-kube-api-access-lfkf4\") pod \"infra-operator-index-dqrzv\" (UID: \"f1c277fc-a49f-4c52-9f82-0725a8e18d43\") " pod="openstack-operators/infra-operator-index-dqrzv"
Dec 13 04:00:37 crc kubenswrapper[4766]: I1213 04:00:37.974825 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfkf4\" (UniqueName: \"kubernetes.io/projected/f1c277fc-a49f-4c52-9f82-0725a8e18d43-kube-api-access-lfkf4\") pod \"infra-operator-index-dqrzv\" (UID: \"f1c277fc-a49f-4c52-9f82-0725a8e18d43\") " pod="openstack-operators/infra-operator-index-dqrzv"
Dec 13 04:00:38 crc kubenswrapper[4766]: I1213 04:00:38.013651 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-dqrzv"
Dec 13 04:00:39 crc kubenswrapper[4766]: I1213 04:00:39.083324 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-dqrzv"]
Dec 13 04:00:39 crc kubenswrapper[4766]: W1213 04:00:39.109048 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1c277fc_a49f_4c52_9f82_0725a8e18d43.slice/crio-773d690cce2a9c780bdd9c285f52ae0ca28ffbb7f394abda622ec7c3eca7c6b2 WatchSource:0}: Error finding container 773d690cce2a9c780bdd9c285f52ae0ca28ffbb7f394abda622ec7c3eca7c6b2: Status 404 returned error can't find the container with id 773d690cce2a9c780bdd9c285f52ae0ca28ffbb7f394abda622ec7c3eca7c6b2
Dec 13 04:00:40 crc kubenswrapper[4766]: I1213 04:00:40.086783 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-dqrzv" event={"ID":"f1c277fc-a49f-4c52-9f82-0725a8e18d43","Type":"ContainerStarted","Data":"773d690cce2a9c780bdd9c285f52ae0ca28ffbb7f394abda622ec7c3eca7c6b2"}
Dec 13 04:00:40 crc kubenswrapper[4766]: I1213 04:00:40.467691 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-dqrzv"]
Dec 13 04:00:40 crc kubenswrapper[4766]: I1213 04:00:40.899514 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-q9rsm"]
Dec 13 04:00:40 crc kubenswrapper[4766]: I1213 04:00:40.901002 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-q9rsm"
Dec 13 04:00:40 crc kubenswrapper[4766]: I1213 04:00:40.911623 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-q9rsm"]
Dec 13 04:00:41 crc kubenswrapper[4766]: I1213 04:00:41.099171 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qrrt\" (UniqueName: \"kubernetes.io/projected/4ea27557-d042-4498-b233-1c60ca372f49-kube-api-access-2qrrt\") pod \"infra-operator-index-q9rsm\" (UID: \"4ea27557-d042-4498-b233-1c60ca372f49\") " pod="openstack-operators/infra-operator-index-q9rsm"
Dec 13 04:00:41 crc kubenswrapper[4766]: I1213 04:00:41.205293 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qrrt\" (UniqueName: \"kubernetes.io/projected/4ea27557-d042-4498-b233-1c60ca372f49-kube-api-access-2qrrt\") pod \"infra-operator-index-q9rsm\" (UID: \"4ea27557-d042-4498-b233-1c60ca372f49\") " pod="openstack-operators/infra-operator-index-q9rsm"
Dec 13 04:00:41 crc kubenswrapper[4766]: I1213 04:00:41.234485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qrrt\" (UniqueName: \"kubernetes.io/projected/4ea27557-d042-4498-b233-1c60ca372f49-kube-api-access-2qrrt\") pod \"infra-operator-index-q9rsm\" (UID: \"4ea27557-d042-4498-b233-1c60ca372f49\") " pod="openstack-operators/infra-operator-index-q9rsm"
Dec 13 04:00:41 crc kubenswrapper[4766]: I1213 04:00:41.527572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-q9rsm"
Dec 13 04:00:42 crc kubenswrapper[4766]: I1213 04:00:42.058105 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-q9rsm"]
Dec 13 04:00:50 crc kubenswrapper[4766]: I1213 04:00:50.502304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-q9rsm" event={"ID":"4ea27557-d042-4498-b233-1c60ca372f49","Type":"ContainerStarted","Data":"5e1f2c3020d0d701c974b78122910960325f4406475ac0ffad9ac44049beb1f3"}
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.113193 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-q9rsm" event={"ID":"4ea27557-d042-4498-b233-1c60ca372f49","Type":"ContainerStarted","Data":"c8396a04db5fb3193cef588178e39f5ef112c23bc10d47acb3317e2bfcb1fa1c"}
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.115224 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-dqrzv" event={"ID":"f1c277fc-a49f-4c52-9f82-0725a8e18d43","Type":"ContainerStarted","Data":"c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74"}
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.115387 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-dqrzv" podUID="f1c277fc-a49f-4c52-9f82-0725a8e18d43" containerName="registry-server" containerID="cri-o://c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74" gracePeriod=2
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.135891 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-q9rsm" podStartSLOduration=11.586901322 podStartE2EDuration="21.135870854s" podCreationTimestamp="2025-12-13 04:00:40 +0000 UTC" firstStartedPulling="2025-12-13 04:00:50.314304608 +0000 UTC m=+981.824237572" lastFinishedPulling="2025-12-13 04:00:59.86327414 +0000 UTC m=+991.373207104" observedRunningTime="2025-12-13 04:01:01.133519696 +0000 UTC m=+992.643452680" watchObservedRunningTime="2025-12-13 04:01:01.135870854 +0000 UTC m=+992.645803808"
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.155801 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-dqrzv" podStartSLOduration=3.369765235 podStartE2EDuration="24.155782741s" podCreationTimestamp="2025-12-13 04:00:37 +0000 UTC" firstStartedPulling="2025-12-13 04:00:39.112948267 +0000 UTC m=+970.622881231" lastFinishedPulling="2025-12-13 04:00:59.898965773 +0000 UTC m=+991.408898737" observedRunningTime="2025-12-13 04:01:01.149577121 +0000 UTC m=+992.659510075" watchObservedRunningTime="2025-12-13 04:01:01.155782741 +0000 UTC m=+992.665715705"
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.496326 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-dqrzv"
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.528239 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-q9rsm"
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.528286 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-q9rsm"
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.544330 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfkf4\" (UniqueName: \"kubernetes.io/projected/f1c277fc-a49f-4c52-9f82-0725a8e18d43-kube-api-access-lfkf4\") pod \"f1c277fc-a49f-4c52-9f82-0725a8e18d43\" (UID: \"f1c277fc-a49f-4c52-9f82-0725a8e18d43\") "
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.556594 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c277fc-a49f-4c52-9f82-0725a8e18d43-kube-api-access-lfkf4" (OuterVolumeSpecName: "kube-api-access-lfkf4") pod "f1c277fc-a49f-4c52-9f82-0725a8e18d43" (UID: "f1c277fc-a49f-4c52-9f82-0725a8e18d43"). InnerVolumeSpecName "kube-api-access-lfkf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.567966 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-q9rsm"
Dec 13 04:01:01 crc kubenswrapper[4766]: I1213 04:01:01.645833 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfkf4\" (UniqueName: \"kubernetes.io/projected/f1c277fc-a49f-4c52-9f82-0725a8e18d43-kube-api-access-lfkf4\") on node \"crc\" DevicePath \"\""
Dec 13 04:01:02 crc kubenswrapper[4766]: I1213 04:01:02.123505 4766 generic.go:334] "Generic (PLEG): container finished" podID="f1c277fc-a49f-4c52-9f82-0725a8e18d43" containerID="c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74" exitCode=0
Dec 13 04:01:02 crc kubenswrapper[4766]: I1213 04:01:02.124321 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-dqrzv"
Dec 13 04:01:02 crc kubenswrapper[4766]: I1213 04:01:02.124755 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-dqrzv" event={"ID":"f1c277fc-a49f-4c52-9f82-0725a8e18d43","Type":"ContainerDied","Data":"c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74"}
Dec 13 04:01:02 crc kubenswrapper[4766]: I1213 04:01:02.124790 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-dqrzv" event={"ID":"f1c277fc-a49f-4c52-9f82-0725a8e18d43","Type":"ContainerDied","Data":"773d690cce2a9c780bdd9c285f52ae0ca28ffbb7f394abda622ec7c3eca7c6b2"}
Dec 13 04:01:02 crc kubenswrapper[4766]: I1213 04:01:02.124827 4766 scope.go:117] "RemoveContainer" containerID="c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74"
Dec 13 04:01:02 crc kubenswrapper[4766]: I1213 04:01:02.143238 4766 scope.go:117] "RemoveContainer" containerID="c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74"
Dec 13 04:01:02 crc kubenswrapper[4766]: E1213 04:01:02.143872 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74\": container with ID starting with c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74 not found: ID does not exist" containerID="c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74"
Dec 13 04:01:02 crc kubenswrapper[4766]: I1213 04:01:02.143940 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74"} err="failed to get container status \"c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74\": rpc error: code = NotFound desc = could not find container \"c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74\": container with ID starting with c9afd80e69e1d817f400dce0a85666a939046bb33f0f90cddfe7bf3a5039da74 not found: ID does not exist"
Dec 13 04:01:02 crc kubenswrapper[4766]: I1213 04:01:02.154495 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-dqrzv"]
Dec 13 04:01:02 crc kubenswrapper[4766]: I1213 04:01:02.155401 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-dqrzv"]
Dec 13 04:01:03 crc kubenswrapper[4766]: I1213 04:01:03.623914 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c277fc-a49f-4c52-9f82-0725a8e18d43" path="/var/lib/kubelet/pods/f1c277fc-a49f-4c52-9f82-0725a8e18d43/volumes"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.268524 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gpl79"]
Dec 13 04:01:04 crc kubenswrapper[4766]: E1213 04:01:04.269753 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c277fc-a49f-4c52-9f82-0725a8e18d43" containerName="registry-server"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.269860 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c277fc-a49f-4c52-9f82-0725a8e18d43" containerName="registry-server"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.270188 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c277fc-a49f-4c52-9f82-0725a8e18d43" containerName="registry-server"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.271867 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.302378 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpl79"]
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.399113 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-utilities\") pod \"community-operators-gpl79\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") " pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.399167 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxm5t\" (UniqueName: \"kubernetes.io/projected/9e237522-d7c8-42ee-8e74-fedffdd61a34-kube-api-access-fxm5t\") pod \"community-operators-gpl79\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") " pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.399204 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-catalog-content\") pod \"community-operators-gpl79\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") " pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.500053 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-utilities\") pod \"community-operators-gpl79\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") " pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.500100 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxm5t\" (UniqueName: \"kubernetes.io/projected/9e237522-d7c8-42ee-8e74-fedffdd61a34-kube-api-access-fxm5t\") pod \"community-operators-gpl79\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") " pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.500128 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-catalog-content\") pod \"community-operators-gpl79\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") " pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.500758 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-utilities\") pod \"community-operators-gpl79\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") " pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.500934 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-catalog-content\") pod \"community-operators-gpl79\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") " pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.530722 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxm5t\" (UniqueName: \"kubernetes.io/projected/9e237522-d7c8-42ee-8e74-fedffdd61a34-kube-api-access-fxm5t\") pod \"community-operators-gpl79\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") " pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:04 crc kubenswrapper[4766]: I1213 04:01:04.687555 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:05 crc kubenswrapper[4766]: I1213 04:01:05.172486 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpl79"]
Dec 13 04:01:05 crc kubenswrapper[4766]: W1213 04:01:05.178508 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e237522_d7c8_42ee_8e74_fedffdd61a34.slice/crio-76a20d959e5baf3d665739f1f303f394115f9c47791aae05c0a4192926cc62f1 WatchSource:0}: Error finding container 76a20d959e5baf3d665739f1f303f394115f9c47791aae05c0a4192926cc62f1: Status 404 returned error can't find the container with id 76a20d959e5baf3d665739f1f303f394115f9c47791aae05c0a4192926cc62f1
Dec 13 04:01:06 crc kubenswrapper[4766]: I1213 04:01:06.147091 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerID="fe2e59f8f1146c05be5c010e7af427dc596e19708401cb87547728781f4ac317" exitCode=0
Dec 13 04:01:06 crc kubenswrapper[4766]: I1213 04:01:06.147175 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpl79" event={"ID":"9e237522-d7c8-42ee-8e74-fedffdd61a34","Type":"ContainerDied","Data":"fe2e59f8f1146c05be5c010e7af427dc596e19708401cb87547728781f4ac317"}
Dec 13 04:01:06 crc kubenswrapper[4766]: I1213 04:01:06.147411 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpl79" event={"ID":"9e237522-d7c8-42ee-8e74-fedffdd61a34","Type":"ContainerStarted","Data":"76a20d959e5baf3d665739f1f303f394115f9c47791aae05c0a4192926cc62f1"}
Dec 13 04:01:08 crc kubenswrapper[4766]: I1213 04:01:08.162623 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpl79" event={"ID":"9e237522-d7c8-42ee-8e74-fedffdd61a34","Type":"ContainerStarted","Data":"03336d11171b6ccc893eebd5ad46a550320086e8f43c2d460cae0c12f43bd111"}
Dec 13 04:01:09 crc kubenswrapper[4766]: I1213 04:01:09.170448 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerID="03336d11171b6ccc893eebd5ad46a550320086e8f43c2d460cae0c12f43bd111" exitCode=0
Dec 13 04:01:09 crc kubenswrapper[4766]: I1213 04:01:09.170504 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpl79" event={"ID":"9e237522-d7c8-42ee-8e74-fedffdd61a34","Type":"ContainerDied","Data":"03336d11171b6ccc893eebd5ad46a550320086e8f43c2d460cae0c12f43bd111"}
Dec 13 04:01:09 crc kubenswrapper[4766]: I1213 04:01:09.732064 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 04:01:09 crc kubenswrapper[4766]: I1213 04:01:09.732155 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 04:01:10 crc kubenswrapper[4766]: I1213 04:01:10.179020 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpl79" event={"ID":"9e237522-d7c8-42ee-8e74-fedffdd61a34","Type":"ContainerStarted","Data":"4fefa8977b36e78768aaef0a6dd432f2cad8ae118bf39240630c7de6d1ab61c5"}
Dec 13 04:01:10 crc kubenswrapper[4766]: I1213 04:01:10.198351 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gpl79" podStartSLOduration=2.586322917 podStartE2EDuration="6.198332954s" podCreationTimestamp="2025-12-13 04:01:04 +0000 UTC" firstStartedPulling="2025-12-13 04:01:06.149291237 +0000 UTC m=+997.659224211" lastFinishedPulling="2025-12-13 04:01:09.761301284 +0000 UTC m=+1001.271234248" observedRunningTime="2025-12-13 04:01:10.196582514 +0000 UTC m=+1001.706515478" watchObservedRunningTime="2025-12-13 04:01:10.198332954 +0000 UTC m=+1001.708265918"
Dec 13 04:01:11 crc kubenswrapper[4766]: I1213 04:01:11.682755 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-q9rsm"
Dec 13 04:01:14 crc kubenswrapper[4766]: I1213 04:01:14.688618 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:14 crc kubenswrapper[4766]: I1213 04:01:14.689666 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:14 crc kubenswrapper[4766]: I1213 04:01:14.743264 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:15 crc kubenswrapper[4766]: I1213 04:01:15.265085 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:15 crc kubenswrapper[4766]: I1213 04:01:15.307708 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gpl79"]
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.291012 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gpl79" podUID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerName="registry-server" containerID="cri-o://4fefa8977b36e78768aaef0a6dd432f2cad8ae118bf39240630c7de6d1ab61c5" gracePeriod=2
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.385235 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lrd86"]
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.386851 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.409561 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrd86"]
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.490886 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-utilities\") pod \"redhat-marketplace-lrd86\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") " pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.490977 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-catalog-content\") pod \"redhat-marketplace-lrd86\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") " pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.491005 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k74t\" (UniqueName: \"kubernetes.io/projected/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-kube-api-access-6k74t\") pod \"redhat-marketplace-lrd86\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") " pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.592559 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-catalog-content\") pod \"redhat-marketplace-lrd86\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") " pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.593062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k74t\" (UniqueName: \"kubernetes.io/projected/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-kube-api-access-6k74t\") pod \"redhat-marketplace-lrd86\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") " pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.593192 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-catalog-content\") pod \"redhat-marketplace-lrd86\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") " pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.593288 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-utilities\") pod \"redhat-marketplace-lrd86\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") " pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.593770 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-utilities\") pod \"redhat-marketplace-lrd86\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") " pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.627690 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k74t\" (UniqueName: \"kubernetes.io/projected/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-kube-api-access-6k74t\") pod \"redhat-marketplace-lrd86\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") " pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:17 crc kubenswrapper[4766]: I1213 04:01:17.738960 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:18 crc kubenswrapper[4766]: I1213 04:01:18.050897 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrd86"]
Dec 13 04:01:18 crc kubenswrapper[4766]: I1213 04:01:18.299088 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrd86" event={"ID":"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4","Type":"ContainerStarted","Data":"25c3b3fa3166226c5ab7a95f7c913f62f4255bb813db011b40a3d7f4fb95d834"}
Dec 13 04:01:19 crc kubenswrapper[4766]: I1213 04:01:19.308811 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerID="4fefa8977b36e78768aaef0a6dd432f2cad8ae118bf39240630c7de6d1ab61c5" exitCode=0
Dec 13 04:01:19 crc kubenswrapper[4766]: I1213 04:01:19.308863 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpl79" event={"ID":"9e237522-d7c8-42ee-8e74-fedffdd61a34","Type":"ContainerDied","Data":"4fefa8977b36e78768aaef0a6dd432f2cad8ae118bf39240630c7de6d1ab61c5"}
Dec 13 04:01:19 crc kubenswrapper[4766]: I1213 04:01:19.310613 4766 generic.go:334] "Generic (PLEG): container finished" podID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerID="29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1" exitCode=0
Dec 13 04:01:19 crc kubenswrapper[4766]: I1213 04:01:19.310638 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrd86" event={"ID":"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4","Type":"ContainerDied","Data":"29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1"}
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.338509 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpl79" event={"ID":"9e237522-d7c8-42ee-8e74-fedffdd61a34","Type":"ContainerDied","Data":"76a20d959e5baf3d665739f1f303f394115f9c47791aae05c0a4192926cc62f1"}
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.338893 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76a20d959e5baf3d665739f1f303f394115f9c47791aae05c0a4192926cc62f1"
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.371187 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.481092 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-utilities\") pod \"9e237522-d7c8-42ee-8e74-fedffdd61a34\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") "
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.481158 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-catalog-content\") pod \"9e237522-d7c8-42ee-8e74-fedffdd61a34\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") "
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.481214 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxm5t\" (UniqueName: \"kubernetes.io/projected/9e237522-d7c8-42ee-8e74-fedffdd61a34-kube-api-access-fxm5t\") pod \"9e237522-d7c8-42ee-8e74-fedffdd61a34\" (UID: \"9e237522-d7c8-42ee-8e74-fedffdd61a34\") "
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.482123 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-utilities" (OuterVolumeSpecName: "utilities") pod "9e237522-d7c8-42ee-8e74-fedffdd61a34" (UID: "9e237522-d7c8-42ee-8e74-fedffdd61a34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.486986 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e237522-d7c8-42ee-8e74-fedffdd61a34-kube-api-access-fxm5t" (OuterVolumeSpecName: "kube-api-access-fxm5t") pod "9e237522-d7c8-42ee-8e74-fedffdd61a34" (UID: "9e237522-d7c8-42ee-8e74-fedffdd61a34"). InnerVolumeSpecName "kube-api-access-fxm5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.532626 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e237522-d7c8-42ee-8e74-fedffdd61a34" (UID: "9e237522-d7c8-42ee-8e74-fedffdd61a34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.583620 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.583736 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e237522-d7c8-42ee-8e74-fedffdd61a34-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 04:01:23 crc kubenswrapper[4766]: I1213 04:01:23.583809 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxm5t\" (UniqueName: \"kubernetes.io/projected/9e237522-d7c8-42ee-8e74-fedffdd61a34-kube-api-access-fxm5t\") on node \"crc\" DevicePath \"\""
Dec 13 04:01:24 crc kubenswrapper[4766]: I1213 04:01:24.347061 4766 generic.go:334] "Generic (PLEG): container finished" podID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerID="24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc" exitCode=0
Dec 13 04:01:24 crc kubenswrapper[4766]: I1213 04:01:24.347258 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrd86" event={"ID":"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4","Type":"ContainerDied","Data":"24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc"}
Dec 13 04:01:24 crc kubenswrapper[4766]: I1213 04:01:24.347635 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpl79"
Dec 13 04:01:24 crc kubenswrapper[4766]: I1213 04:01:24.381255 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gpl79"]
Dec 13 04:01:24 crc kubenswrapper[4766]: I1213 04:01:24.385583 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gpl79"]
Dec 13 04:01:25 crc kubenswrapper[4766]: I1213 04:01:25.358342 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrd86" event={"ID":"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4","Type":"ContainerStarted","Data":"26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0"}
Dec 13 04:01:25 crc kubenswrapper[4766]: I1213 04:01:25.387156 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lrd86" podStartSLOduration=2.569096885 podStartE2EDuration="8.387133467s" podCreationTimestamp="2025-12-13 04:01:17 +0000 UTC" firstStartedPulling="2025-12-13 04:01:19.312016635 +0000 UTC m=+1010.821949599" lastFinishedPulling="2025-12-13 04:01:25.130053217 +0000 UTC m=+1016.639986181" observedRunningTime="2025-12-13 04:01:25.38414753 +0000 UTC m=+1016.894080514" watchObservedRunningTime="2025-12-13 04:01:25.387133467 +0000 UTC m=+1016.897066431"
Dec 13 04:01:25 crc kubenswrapper[4766]: I1213 04:01:25.624230 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e237522-d7c8-42ee-8e74-fedffdd61a34" path="/var/lib/kubelet/pods/9e237522-d7c8-42ee-8e74-fedffdd61a34/volumes"
Dec 13 04:01:27 crc kubenswrapper[4766]: I1213 04:01:27.740257 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:27 crc kubenswrapper[4766]: I1213 04:01:27.740333 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lrd86"
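The pod_startup_latency_tracker entry above reports two durations for the same pod, and the logged timestamps let both be reproduced: the numbers are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small sketch of the arithmetic, using the wall-clock parts of the logged values (the monotonic m=+... suffixes dropped); the interpretation of the fields is inferred from the numbers matching, not quoted from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// Layout matches the log's "2025-12-13 04:01:17 +0000 UTC" form;
		// .999999999 makes the fractional seconds optional.
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-13 04:01:17 +0000 UTC")
	firstPull := parse("2025-12-13 04:01:19.312016635 +0000 UTC")
	lastPull := parse("2025-12-13 04:01:25.130053217 +0000 UTC")
	running := parse("2025-12-13 04:01:25.387133467 +0000 UTC")

	e2e := running.Sub(created)          // 8.387133467s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.569096885s = podStartSLOduration
	fmt.Println(e2e, slo)
}
```

Run as-is this prints 8.387133467s and 2.569096885s, matching the log entry to the nanosecond, which is what makes the pull-window interpretation credible.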
Dec 13 04:01:27 crc kubenswrapper[4766]: I1213 04:01:27.786077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.052860 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"]
Dec 13 04:01:29 crc kubenswrapper[4766]: E1213 04:01:29.053736 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerName="extract-utilities"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.053767 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerName="extract-utilities"
Dec 13 04:01:29 crc kubenswrapper[4766]: E1213 04:01:29.053786 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerName="extract-content"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.053796 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerName="extract-content"
Dec 13 04:01:29 crc kubenswrapper[4766]: E1213 04:01:29.053810 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerName="registry-server"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.053820 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerName="registry-server"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.053956 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e237522-d7c8-42ee-8e74-fedffdd61a34" containerName="registry-server"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.055778 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.058726 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vg794"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.065596 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-util\") pod \"353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") " pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.065711 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsj4\" (UniqueName: \"kubernetes.io/projected/4abca1b2-000f-4ba5-8f32-012dafeb6043-kube-api-access-9tsj4\") pod \"353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") " pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.065760 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-bundle\") pod \"353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") " pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.065948 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"]
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.167010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-bundle\") pod \"353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") " pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.167200 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-util\") pod \"353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") " pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.167288 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tsj4\" (UniqueName: \"kubernetes.io/projected/4abca1b2-000f-4ba5-8f32-012dafeb6043-kube-api-access-9tsj4\") pod \"353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") " pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.168012 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-bundle\") pod \"353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") " pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.168310 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-util\") pod \"353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") " pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.192638 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tsj4\" (UniqueName: \"kubernetes.io/projected/4abca1b2-000f-4ba5-8f32-012dafeb6043-kube-api-access-9tsj4\") pod \"353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") " pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.387010 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:29 crc kubenswrapper[4766]: I1213 04:01:29.635504 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"]
Dec 13 04:01:30 crc kubenswrapper[4766]: I1213 04:01:30.391110 4766 generic.go:334] "Generic (PLEG): container finished" podID="4abca1b2-000f-4ba5-8f32-012dafeb6043" containerID="6cace8b66c375add8f1bc812ee82ebba6ea67103bef188b01a8be8deb506a79b" exitCode=0
Dec 13 04:01:30 crc kubenswrapper[4766]: I1213 04:01:30.391201 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv" event={"ID":"4abca1b2-000f-4ba5-8f32-012dafeb6043","Type":"ContainerDied","Data":"6cace8b66c375add8f1bc812ee82ebba6ea67103bef188b01a8be8deb506a79b"}
Dec 13 04:01:30 crc kubenswrapper[4766]: I1213 04:01:30.391410 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv" event={"ID":"4abca1b2-000f-4ba5-8f32-012dafeb6043","Type":"ContainerStarted","Data":"3b14b83060ab9346559ef8e2e4608c6342d4a84e272fade8f2f2b5d09a904e00"}
Dec 13 04:01:31 crc kubenswrapper[4766]: I1213 04:01:31.400492 4766 generic.go:334] "Generic (PLEG): container finished" podID="4abca1b2-000f-4ba5-8f32-012dafeb6043" containerID="e7455179217e26a69700847d66aaffbd05b4e6219feded8504744292be532f63" exitCode=0
Dec 13 04:01:31 crc kubenswrapper[4766]: I1213 04:01:31.400646 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv" event={"ID":"4abca1b2-000f-4ba5-8f32-012dafeb6043","Type":"ContainerDied","Data":"e7455179217e26a69700847d66aaffbd05b4e6219feded8504744292be532f63"}
Dec 13 04:01:32 crc kubenswrapper[4766]: I1213 04:01:32.416224 4766 generic.go:334] "Generic (PLEG): container finished" podID="4abca1b2-000f-4ba5-8f32-012dafeb6043" containerID="c347bc958816511cfd62d9f7838235446ac5332a72e42a971c27f5a1c6c38f9a" exitCode=0
Dec 13 04:01:32 crc kubenswrapper[4766]: I1213 04:01:32.416267 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv" event={"ID":"4abca1b2-000f-4ba5-8f32-012dafeb6043","Type":"ContainerDied","Data":"c347bc958816511cfd62d9f7838235446ac5332a72e42a971c27f5a1c6c38f9a"}
Dec 13 04:01:33 crc kubenswrapper[4766]: I1213 04:01:33.682079 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:33 crc kubenswrapper[4766]: I1213 04:01:33.758850 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tsj4\" (UniqueName: \"kubernetes.io/projected/4abca1b2-000f-4ba5-8f32-012dafeb6043-kube-api-access-9tsj4\") pod \"4abca1b2-000f-4ba5-8f32-012dafeb6043\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") "
Dec 13 04:01:33 crc kubenswrapper[4766]: I1213 04:01:33.759061 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-bundle\") pod \"4abca1b2-000f-4ba5-8f32-012dafeb6043\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") "
Dec 13 04:01:33 crc kubenswrapper[4766]: I1213 04:01:33.759116 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-util\") pod \"4abca1b2-000f-4ba5-8f32-012dafeb6043\" (UID: \"4abca1b2-000f-4ba5-8f32-012dafeb6043\") "
Dec 13 04:01:33 crc kubenswrapper[4766]: I1213 04:01:33.760042 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-bundle" (OuterVolumeSpecName: "bundle") pod "4abca1b2-000f-4ba5-8f32-012dafeb6043" (UID: "4abca1b2-000f-4ba5-8f32-012dafeb6043"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:01:33 crc kubenswrapper[4766]: I1213 04:01:33.768563 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4abca1b2-000f-4ba5-8f32-012dafeb6043-kube-api-access-9tsj4" (OuterVolumeSpecName: "kube-api-access-9tsj4") pod "4abca1b2-000f-4ba5-8f32-012dafeb6043" (UID: "4abca1b2-000f-4ba5-8f32-012dafeb6043"). InnerVolumeSpecName "kube-api-access-9tsj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:01:33 crc kubenswrapper[4766]: I1213 04:01:33.773850 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-util" (OuterVolumeSpecName: "util") pod "4abca1b2-000f-4ba5-8f32-012dafeb6043" (UID: "4abca1b2-000f-4ba5-8f32-012dafeb6043"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
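The SyncLoop (PLEG) entries above carry their payload after event= as plain JSON, so the lifecycle of this bundle pod's containers (ContainerStarted for the sandbox, then one ContainerDied with exitCode=0 per finished step) can be recovered mechanically. A sketch, assuming the journal is piped in on stdin; the struct and program are illustrative, not kubelet types:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"regexp"
)

// plegEvent mirrors the fields visible in the logged payload.
type plegEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // ContainerStarted / ContainerDied
	Data string `json:"Data"` // container (or sandbox) ID
}

// The payload is the first {...} object following "event=" on the line.
var eventRe = regexp.MustCompile(`event=(\{.*?\})`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := eventRe.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // not a PLEG event line
		}
		var ev plegEvent
		if err := json.Unmarshal([]byte(m[1]), &ev); err != nil {
			continue
		}
		fmt.Printf("%-16s pod=%s id=%.12s\n", ev.Type, ev.ID, ev.Data)
	}
}
```

Applied to the entries above, this yields one ContainerStarted for sandbox 3b14b830... and three ContainerDied events (6cace8b6..., e7455179..., c347bc95...), the serialized pull/extract steps of the bundle job.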
Dec 13 04:01:33 crc kubenswrapper[4766]: I1213 04:01:33.860446 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-bundle\") on node \"crc\" DevicePath \"\""
Dec 13 04:01:33 crc kubenswrapper[4766]: I1213 04:01:33.860496 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4abca1b2-000f-4ba5-8f32-012dafeb6043-util\") on node \"crc\" DevicePath \"\""
Dec 13 04:01:33 crc kubenswrapper[4766]: I1213 04:01:33.860510 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tsj4\" (UniqueName: \"kubernetes.io/projected/4abca1b2-000f-4ba5-8f32-012dafeb6043-kube-api-access-9tsj4\") on node \"crc\" DevicePath \"\""
Dec 13 04:01:34 crc kubenswrapper[4766]: I1213 04:01:34.431640 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv" event={"ID":"4abca1b2-000f-4ba5-8f32-012dafeb6043","Type":"ContainerDied","Data":"3b14b83060ab9346559ef8e2e4608c6342d4a84e272fade8f2f2b5d09a904e00"}
Dec 13 04:01:34 crc kubenswrapper[4766]: I1213 04:01:34.431741 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b14b83060ab9346559ef8e2e4608c6342d4a84e272fade8f2f2b5d09a904e00"
Dec 13 04:01:34 crc kubenswrapper[4766]: I1213 04:01:34.431704 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv"
Dec 13 04:01:37 crc kubenswrapper[4766]: I1213 04:01:37.795381 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:39 crc kubenswrapper[4766]: I1213 04:01:39.732206 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 04:01:39 crc kubenswrapper[4766]: I1213 04:01:39.732660 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 04:01:42 crc kubenswrapper[4766]: I1213 04:01:42.204971 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrd86"]
Dec 13 04:01:42 crc kubenswrapper[4766]: I1213 04:01:42.206730 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lrd86" podUID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerName="registry-server" containerID="cri-o://26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0" gracePeriod=2
Dec 13 04:01:42 crc kubenswrapper[4766]: I1213 04:01:42.929253 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"]
Dec 13 04:01:42 crc kubenswrapper[4766]: E1213 04:01:42.929588 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abca1b2-000f-4ba5-8f32-012dafeb6043" containerName="util"
Dec 13 04:01:42 crc kubenswrapper[4766]: I1213 04:01:42.929610 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abca1b2-000f-4ba5-8f32-012dafeb6043" containerName="util"
Dec 13 04:01:42 crc kubenswrapper[4766]: E1213 04:01:42.929623 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abca1b2-000f-4ba5-8f32-012dafeb6043" containerName="extract"
Dec 13 04:01:42 crc kubenswrapper[4766]: I1213 04:01:42.929629 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abca1b2-000f-4ba5-8f32-012dafeb6043" containerName="extract"
Dec 13 04:01:42 crc kubenswrapper[4766]: E1213 04:01:42.929637 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abca1b2-000f-4ba5-8f32-012dafeb6043" containerName="pull"
Dec 13 04:01:42 crc kubenswrapper[4766]: I1213 04:01:42.929643 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abca1b2-000f-4ba5-8f32-012dafeb6043" containerName="pull"
Dec 13 04:01:42 crc kubenswrapper[4766]: I1213 04:01:42.929758 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4abca1b2-000f-4ba5-8f32-012dafeb6043" containerName="extract"
Dec 13 04:01:42 crc kubenswrapper[4766]: I1213 04:01:42.930446 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:42 crc kubenswrapper[4766]: I1213 04:01:42.933999 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8clhc"
Dec 13 04:01:42 crc kubenswrapper[4766]: I1213 04:01:42.934443 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.059053 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7de0f18b-5607-4711-ace6-6a841491a182-webhook-cert\") pod \"infra-operator-controller-manager-8cbcb47f-6sh6l\" (UID: \"7de0f18b-5607-4711-ace6-6a841491a182\") " pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.059125 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7de0f18b-5607-4711-ace6-6a841491a182-apiservice-cert\") pod \"infra-operator-controller-manager-8cbcb47f-6sh6l\" (UID: \"7de0f18b-5607-4711-ace6-6a841491a182\") " pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.059185 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wprfv\" (UniqueName: \"kubernetes.io/projected/7de0f18b-5607-4711-ace6-6a841491a182-kube-api-access-wprfv\") pod \"infra-operator-controller-manager-8cbcb47f-6sh6l\" (UID: \"7de0f18b-5607-4711-ace6-6a841491a182\") " pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.107556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"]
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.161202 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wprfv\" (UniqueName: \"kubernetes.io/projected/7de0f18b-5607-4711-ace6-6a841491a182-kube-api-access-wprfv\") pod \"infra-operator-controller-manager-8cbcb47f-6sh6l\" (UID: \"7de0f18b-5607-4711-ace6-6a841491a182\") " pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.161278 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7de0f18b-5607-4711-ace6-6a841491a182-webhook-cert\") pod \"infra-operator-controller-manager-8cbcb47f-6sh6l\" (UID: \"7de0f18b-5607-4711-ace6-6a841491a182\") " pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.161353 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7de0f18b-5607-4711-ace6-6a841491a182-apiservice-cert\") pod \"infra-operator-controller-manager-8cbcb47f-6sh6l\" (UID: \"7de0f18b-5607-4711-ace6-6a841491a182\") " pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.172310 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7de0f18b-5607-4711-ace6-6a841491a182-apiservice-cert\") pod \"infra-operator-controller-manager-8cbcb47f-6sh6l\" (UID: \"7de0f18b-5607-4711-ace6-6a841491a182\") " pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.172920 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7de0f18b-5607-4711-ace6-6a841491a182-webhook-cert\") pod \"infra-operator-controller-manager-8cbcb47f-6sh6l\" (UID: \"7de0f18b-5607-4711-ace6-6a841491a182\") " pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.222712 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wprfv\" (UniqueName: \"kubernetes.io/projected/7de0f18b-5607-4711-ace6-6a841491a182-kube-api-access-wprfv\") pod \"infra-operator-controller-manager-8cbcb47f-6sh6l\" (UID: \"7de0f18b-5607-4711-ace6-6a841491a182\") " pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.374951 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.436807 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrd86"
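The two prober entries above record an HTTP liveness probe failing with connection refused against http://127.0.0.1:8798/health. Functionally such a probe is an ordinary HTTP GET: kubelet counts a response with status 200-399 as success, and a transport-level error (like the closed port here) as failure. A reduced sketch of that check; the URL is taken from the log entry, and the 1-second timeout is an assumption, not the pod's configured value:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probe performs one HTTP-GET health check, kubelet-style.
func probe(url string) string {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused",
		// exactly the output string captured in the log above.
		return fmt.Sprintf("failure: %v", err)
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success"
	}
	return fmt.Sprintf("failure: status %d", resp.StatusCode)
}

func main() {
	fmt.Println(probe("http://127.0.0.1:8798/health"))
}
```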
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.485872 4766 generic.go:334] "Generic (PLEG): container finished" podID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerID="26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0" exitCode=0
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.485925 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrd86" event={"ID":"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4","Type":"ContainerDied","Data":"26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0"}
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.485962 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrd86" event={"ID":"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4","Type":"ContainerDied","Data":"25c3b3fa3166226c5ab7a95f7c913f62f4255bb813db011b40a3d7f4fb95d834"}
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.485988 4766 scope.go:117] "RemoveContainer" containerID="26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.486118 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrd86"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.511639 4766 scope.go:117] "RemoveContainer" containerID="24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.566881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-utilities\") pod \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") "
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.567065 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-catalog-content\") pod \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") "
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.567189 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k74t\" (UniqueName: \"kubernetes.io/projected/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-kube-api-access-6k74t\") pod \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\" (UID: \"26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4\") "
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.568314 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-utilities" (OuterVolumeSpecName: "utilities") pod "26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" (UID: "26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.571752 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-kube-api-access-6k74t" (OuterVolumeSpecName: "kube-api-access-6k74t") pod "26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" (UID: "26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4"). InnerVolumeSpecName "kube-api-access-6k74t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.572729 4766 scope.go:117] "RemoveContainer" containerID="29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.613558 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" (UID: "26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.671102 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k74t\" (UniqueName: \"kubernetes.io/projected/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-kube-api-access-6k74t\") on node \"crc\" DevicePath \"\""
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.671147 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.671164 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.709660 4766 scope.go:117] "RemoveContainer" containerID="26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0"
Dec 13 04:01:43 crc kubenswrapper[4766]: E1213 04:01:43.713653 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0\": container with ID starting with 26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0 not found: ID does not exist" containerID="26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.713696 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0"} err="failed to get container status \"26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0\": rpc error: code = NotFound desc = could not find container \"26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0\": container with ID starting with 26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0 not found: ID does not exist"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.713728 4766 scope.go:117] "RemoveContainer" containerID="24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc"
Dec 13 04:01:43 crc kubenswrapper[4766]: E1213 04:01:43.714265 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc\": container with ID starting with 24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc not found: ID does not exist" containerID="24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.714344 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc"} err="failed to get container status \"24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc\": rpc error: code = NotFound desc = could not find container \"24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc\": container with ID starting with 24c19edf8e84349cc13bf0866545f13969608e8520f5b6fd0cdb33b3c0f3b8bc not found: ID does not exist"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.714391 4766 scope.go:117] "RemoveContainer" containerID="29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1"
Dec 13 04:01:43 crc kubenswrapper[4766]: E1213 04:01:43.714850 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1\": container with ID starting with 29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1 not found: ID does not exist" containerID="29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.714882 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1"} err="failed to get container status \"29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1\": rpc error: code = NotFound desc = could not find container \"29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1\": container with ID starting with 29e70e2772820d5a351fe9df30bc1daa1b2cee8c7a92725f5f9061108f0a22c1 not found: ID does not exist"
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.811617 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrd86"]
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.817534 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrd86"]
Dec 13 04:01:43 crc kubenswrapper[4766]: I1213 04:01:43.848011 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l"]
Dec 13 04:01:44 crc kubenswrapper[4766]: I1213 04:01:44.497630 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l" event={"ID":"7de0f18b-5607-4711-ace6-6a841491a182","Type":"ContainerStarted","Data":"807e5f705c1a8cf94e181bf4218738debd3f40b900c4d257a3c365a97e34a8e5"}
Dec 13 04:01:45 crc kubenswrapper[4766]: I1213 04:01:45.627025 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" path="/var/lib/kubelet/pods/26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4/volumes"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.592480 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-0"]
Dec 13 04:01:46 crc kubenswrapper[4766]: E1213 04:01:46.592747 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerName="extract-content"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.592760 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerName="extract-content"
Dec 13 04:01:46 crc kubenswrapper[4766]: E1213 04:01:46.592779 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerName="registry-server"
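The RemoveContainer sequence above shows cleanup racing container garbage collection: ContainerStatus and DeleteContainer come back from CRI-O as gRPC NotFound, which the kubelet logs and then moves past, since a container that is already gone needs no further deletion. A sketch of that idempotent pattern; removeFromRuntime is a stand-in stub returning the same error shape seen in the log, not the real CRI client, and the program needs google.golang.org/grpc in its module:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeFromRuntime is a placeholder for a CRI RemoveContainer RPC; here it
// always fails the way CRI-O does for an already-deleted container.
func removeFromRuntime(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: ID does not exist", id)
}

// cleanupContainer treats NotFound as success: deletion is idempotent.
func cleanupContainer(id string) error {
	if err := removeFromRuntime(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %.12s already gone, nothing to do\n", id)
			return nil
		}
		return err // any other runtime error is still fatal
	}
	return nil
}

func main() {
	_ = cleanupContainer("26200d77ea8f3616c0314a1f3286e87c3988b57d9a40423778768234cd4241a0")
}
```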
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.592787 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerName="registry-server"
Dec 13 04:01:46 crc kubenswrapper[4766]: E1213 04:01:46.592798 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerName="extract-utilities"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.592806 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerName="extract-utilities"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.592934 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c829f4-fb00-4d2e-b9d4-b4c5ea8ce2c4" containerName="registry-server"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.593743 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.595441 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"osp-secret"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.600936 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openshift-service-ca.crt"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.606744 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.606987 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config-data"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.618385 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"]
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.618644 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"galera-openstack-dockercfg-qvs2l"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.620022 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"kube-root-ca.crt"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.628001 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-1"]
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.629505 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.635190 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-2"]
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.636937 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.657672 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"]
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.669537 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rtqrm"]
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.671011 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rtqrm"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.685899 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"]
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.711645 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rtqrm"]
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769282 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769341 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cfe14027-5466-43b9-90a1-04ea55370210-config-data-default\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769375 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cfe14027-5466-43b9-90a1-04ea55370210-kolla-config\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769398 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-secrets\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769447 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cfe14027-5466-43b9-90a1-04ea55370210-config-data-generated\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769473 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h69z2\" (UniqueName: \"kubernetes.io/projected/19b5d6fe-47e9-4816-907d-af0d46b556d2-kube-api-access-h69z2\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769494 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19b5d6fe-47e9-4816-907d-af0d46b556d2-kolla-config\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769531 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfe14027-5466-43b9-90a1-04ea55370210-operator-scripts\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769618 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8hrd\" (UniqueName: \"kubernetes.io/projected/cfe14027-5466-43b9-90a1-04ea55370210-kube-api-access-q8hrd\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769674 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-kolla-config\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769699 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19b5d6fe-47e9-4816-907d-af0d46b556d2-config-data-default\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769732 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19b5d6fe-47e9-4816-907d-af0d46b556d2-config-data-generated\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769754 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-config-data-default\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19b5d6fe-47e9-4816-907d-af0d46b556d2-operator-scripts\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769799 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/cfe14027-5466-43b9-90a1-04ea55370210-secrets\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769840 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/19b5d6fe-47e9-4816-907d-af0d46b556d2-secrets\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769881 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7798p\" (UniqueName: \"kubernetes.io/projected/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-kube-api-access-7798p\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769903 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.769960 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.873052 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-kolla-config\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.871722 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-kolla-config\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.873215 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19b5d6fe-47e9-4816-907d-af0d46b556d2-config-data-default\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.873242 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19b5d6fe-47e9-4816-907d-af0d46b556d2-config-data-generated\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.873821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19b5d6fe-47e9-4816-907d-af0d46b556d2-config-data-generated\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.874124 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19b5d6fe-47e9-4816-907d-af0d46b556d2-config-data-default\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.874213 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-config-data-default\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.874530 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19b5d6fe-47e9-4816-907d-af0d46b556d2-operator-scripts\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.875039 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.876271 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19b5d6fe-47e9-4816-907d-af0d46b556d2-operator-scripts\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.874563 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.876358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/cfe14027-5466-43b9-90a1-04ea55370210-secrets\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.876390 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/19b5d6fe-47e9-4816-907d-af0d46b556d2-secrets\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.877837 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7798p\" (UniqueName: \"kubernetes.io/projected/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-kube-api-access-7798p\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.878142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
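The MountVolume.MountDevice entries above and below record where each galera replica's local persistent volume landed: local-storage02-crc at /mnt/openstack/pv02 for openstack-galera-1, with pv09 (galera-2) and pv04 (galera-0) following. A sketch that recovers that placement from a journal stream; the regex mirrors the exact backslash-escaped quoting visible in these entries and is illustrative only:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches: MountDevice succeeded for volume \"local-storageNN-crc\" ...
//          device mount path \"/mnt/openstack/pvNN\"" pod="ns/name"
var devRe = regexp.MustCompile(
	`MountDevice succeeded for volume \\"(local-storage\d+-crc)\\".*device mount path \\"([^"\\]+)\\".*pod="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := devRe.FindStringSubmatch(sc.Text()); m != nil {
			// pod -> device mount path (PV name)
			fmt.Printf("%-40s -> %-22s (%s)\n", m[3], m[2], m[1])
		}
	}
}
```

On this section of the log it would print three lines, one per galera pod, each bound to a distinct pvNN directory, which is the expected spread for a three-replica StatefulSet over node-local storage.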
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.878222 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.878280 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.878314 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cfe14027-5466-43b9-90a1-04ea55370210-config-data-default\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.878451 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cfe14027-5466-43b9-90a1-04ea55370210-kolla-config\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.878799 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.878938 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879382 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-secrets\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879504 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-utilities\") pod \"certified-operators-rtqrm\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " pod="openshift-marketplace/certified-operators-rtqrm"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cfe14027-5466-43b9-90a1-04ea55370210-config-data-generated\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879633 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h69z2\" (UniqueName: \"kubernetes.io/projected/19b5d6fe-47e9-4816-907d-af0d46b556d2-kube-api-access-h69z2\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19b5d6fe-47e9-4816-907d-af0d46b556d2-kolla-config\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879693 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfe14027-5466-43b9-90a1-04ea55370210-operator-scripts\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879720 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879747 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-catalog-content\") pod \"certified-operators-rtqrm\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " pod="openshift-marketplace/certified-operators-rtqrm"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879812 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8hrd\" (UniqueName: \"kubernetes.io/projected/cfe14027-5466-43b9-90a1-04ea55370210-kube-api-access-q8hrd\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879873 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zl5f\" (UniqueName: \"kubernetes.io/projected/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-kube-api-access-5zl5f\") pod \"certified-operators-rtqrm\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " pod="openshift-marketplace/certified-operators-rtqrm"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.879913 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cfe14027-5466-43b9-90a1-04ea55370210-config-data-default\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.881509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-config-data-default\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.881519 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19b5d6fe-47e9-4816-907d-af0d46b556d2-kolla-config\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.881617 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") device mount path \"/mnt/openstack/pv04\"" pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.882373 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.882656 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cfe14027-5466-43b9-90a1-04ea55370210-kolla-config\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.883270 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/cfe14027-5466-43b9-90a1-04ea55370210-secrets\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.885853 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-secrets\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.886036 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cfe14027-5466-43b9-90a1-04ea55370210-config-data-generated\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.888398 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfe14027-5466-43b9-90a1-04ea55370210-operator-scripts\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1"
Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.901993 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/19b5d6fe-47e9-4816-907d-af0d46b556d2-secrets\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2"
Dec 13 04:01:46 crc
kubenswrapper[4766]: I1213 04:01:46.914565 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h69z2\" (UniqueName: \"kubernetes.io/projected/19b5d6fe-47e9-4816-907d-af0d46b556d2-kube-api-access-h69z2\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.919812 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7798p\" (UniqueName: \"kubernetes.io/projected/8d1d4ee6-5f01-4c7b-b326-6f4dc686022e-kube-api-access-7798p\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.928485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8hrd\" (UniqueName: \"kubernetes.io/projected/cfe14027-5466-43b9-90a1-04ea55370210-kube-api-access-q8hrd\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.929867 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e\") " pod="glance-kuttl-tests/openstack-galera-0" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.930762 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-1\" (UID: \"cfe14027-5466-43b9-90a1-04ea55370210\") " pod="glance-kuttl-tests/openstack-galera-1" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.943534 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-2\" (UID: \"19b5d6fe-47e9-4816-907d-af0d46b556d2\") " pod="glance-kuttl-tests/openstack-galera-2" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.947902 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.962071 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.981606 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-utilities\") pod \"certified-operators-rtqrm\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.981678 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-catalog-content\") pod \"certified-operators-rtqrm\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.981745 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zl5f\" (UniqueName: \"kubernetes.io/projected/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-kube-api-access-5zl5f\") pod \"certified-operators-rtqrm\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.982208 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-utilities\") pod \"certified-operators-rtqrm\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:01:46 crc kubenswrapper[4766]: I1213 04:01:46.982247 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-catalog-content\") pod \"certified-operators-rtqrm\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:01:47 crc kubenswrapper[4766]: I1213 04:01:47.003967 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zl5f\" (UniqueName: \"kubernetes.io/projected/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-kube-api-access-5zl5f\") pod \"certified-operators-rtqrm\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:01:47 crc kubenswrapper[4766]: I1213 04:01:47.213276 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Dec 13 04:01:47 crc kubenswrapper[4766]: I1213 04:01:47.292522 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:01:48 crc kubenswrapper[4766]: I1213 04:01:48.767212 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Dec 13 04:01:49 crc kubenswrapper[4766]: I1213 04:01:49.554724 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"cfe14027-5466-43b9-90a1-04ea55370210","Type":"ContainerStarted","Data":"e4fa92f1418d1944c9811fd8a9cb73c2317c6044ee9bf8dec12e4da5081ea261"} Dec 13 04:01:49 crc kubenswrapper[4766]: I1213 04:01:49.591385 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Dec 13 04:01:49 crc kubenswrapper[4766]: W1213 04:01:49.597066 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19b5d6fe_47e9_4816_907d_af0d46b556d2.slice/crio-b60e6ff70e0479185d116df5a08b7682bcba3bf50307f19fb650ea8ff71ef90d WatchSource:0}: Error finding container b60e6ff70e0479185d116df5a08b7682bcba3bf50307f19fb650ea8ff71ef90d: Status 404 returned error can't find the container with id b60e6ff70e0479185d116df5a08b7682bcba3bf50307f19fb650ea8ff71ef90d Dec 13 04:01:49 crc kubenswrapper[4766]: I1213 04:01:49.705630 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rtqrm"] Dec 13 04:01:49 crc kubenswrapper[4766]: I1213 04:01:49.783372 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Dec 13 04:01:50 crc kubenswrapper[4766]: I1213 04:01:50.562864 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e","Type":"ContainerStarted","Data":"6f7505611a925594312dc64b2c1a9eb3c53293f7372b4961e11fd1334c955b78"} Dec 13 04:01:50 crc kubenswrapper[4766]: I1213 04:01:50.565031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l" event={"ID":"7de0f18b-5607-4711-ace6-6a841491a182","Type":"ContainerStarted","Data":"692a72b4ed418101ae27b6f357200f9307b555ce6c57ae7f5415d68b7df5191b"} Dec 13 04:01:50 crc kubenswrapper[4766]: I1213 04:01:50.565078 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l" event={"ID":"7de0f18b-5607-4711-ace6-6a841491a182","Type":"ContainerStarted","Data":"7b30944511799283f1b49b7cd3048ecbc4372ec3eab7cd3d6f9a65d716a5dd1a"} Dec 13 04:01:50 crc kubenswrapper[4766]: I1213 04:01:50.565188 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l" Dec 13 04:01:50 crc kubenswrapper[4766]: I1213 04:01:50.567421 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"19b5d6fe-47e9-4816-907d-af0d46b556d2","Type":"ContainerStarted","Data":"b60e6ff70e0479185d116df5a08b7682bcba3bf50307f19fb650ea8ff71ef90d"} Dec 13 04:01:50 crc kubenswrapper[4766]: I1213 04:01:50.570067 4766 generic.go:334] "Generic (PLEG): container finished" podID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerID="29040dff5be67fa0e726a02ce4e2c5798faecea3db88b16ca89035c16b0e5702" exitCode=0 Dec 13 04:01:50 crc kubenswrapper[4766]: I1213 04:01:50.570340 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtqrm" 
event={"ID":"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2","Type":"ContainerDied","Data":"29040dff5be67fa0e726a02ce4e2c5798faecea3db88b16ca89035c16b0e5702"} Dec 13 04:01:50 crc kubenswrapper[4766]: I1213 04:01:50.570592 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtqrm" event={"ID":"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2","Type":"ContainerStarted","Data":"5f712f76176415d877f0007b47f600048f23a0206c1375ad4432501d8bccaa6e"} Dec 13 04:01:50 crc kubenswrapper[4766]: I1213 04:01:50.596328 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l" podStartSLOduration=3.048939159 podStartE2EDuration="8.596309816s" podCreationTimestamp="2025-12-13 04:01:42 +0000 UTC" firstStartedPulling="2025-12-13 04:01:43.861652884 +0000 UTC m=+1035.371585838" lastFinishedPulling="2025-12-13 04:01:49.409023531 +0000 UTC m=+1040.918956495" observedRunningTime="2025-12-13 04:01:50.59540141 +0000 UTC m=+1042.105334374" watchObservedRunningTime="2025-12-13 04:01:50.596309816 +0000 UTC m=+1042.106242770" Dec 13 04:01:51 crc kubenswrapper[4766]: I1213 04:01:51.581761 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtqrm" event={"ID":"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2","Type":"ContainerStarted","Data":"bfd19ab6c0350c8971d1ff76b4a1a636cc8a6b508bcfeb03abaf92769067b59c"} Dec 13 04:01:52 crc kubenswrapper[4766]: I1213 04:01:52.590877 4766 generic.go:334] "Generic (PLEG): container finished" podID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerID="bfd19ab6c0350c8971d1ff76b4a1a636cc8a6b508bcfeb03abaf92769067b59c" exitCode=0 Dec 13 04:01:52 crc kubenswrapper[4766]: I1213 04:01:52.591193 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtqrm" event={"ID":"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2","Type":"ContainerDied","Data":"bfd19ab6c0350c8971d1ff76b4a1a636cc8a6b508bcfeb03abaf92769067b59c"} Dec 13 04:02:00 crc kubenswrapper[4766]: I1213 04:02:00.646900 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"cfe14027-5466-43b9-90a1-04ea55370210","Type":"ContainerStarted","Data":"270775bd2ec1cf1ff9929862991007924740727b0dc79d3c8af72a86bcb3a324"} Dec 13 04:02:00 crc kubenswrapper[4766]: I1213 04:02:00.648996 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e","Type":"ContainerStarted","Data":"d377101a33abd7a68341b0bcf2e90e4796f716e273c6127df54d62ee090bd7bd"} Dec 13 04:02:00 crc kubenswrapper[4766]: I1213 04:02:00.658210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"19b5d6fe-47e9-4816-907d-af0d46b556d2","Type":"ContainerStarted","Data":"3a1ab7e94f19c464c1a2b4473c3e99a2dc35fb07e6f81522cc57002e03b2c789"} Dec 13 04:02:00 crc kubenswrapper[4766]: I1213 04:02:00.661706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtqrm" event={"ID":"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2","Type":"ContainerStarted","Data":"1011c65e645514dbe0dc8faf79556c6a8485b99910dcbd98a457c027cf44fc5f"} Dec 13 04:02:00 crc kubenswrapper[4766]: I1213 04:02:00.702579 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rtqrm" podStartSLOduration=5.8559508959999995 
podStartE2EDuration="14.702558839s" podCreationTimestamp="2025-12-13 04:01:46 +0000 UTC" firstStartedPulling="2025-12-13 04:01:50.573092654 +0000 UTC m=+1042.083025618" lastFinishedPulling="2025-12-13 04:01:59.419700597 +0000 UTC m=+1050.929633561" observedRunningTime="2025-12-13 04:02:00.699921332 +0000 UTC m=+1052.209854296" watchObservedRunningTime="2025-12-13 04:02:00.702558839 +0000 UTC m=+1052.212491793" Dec 13 04:02:03 crc kubenswrapper[4766]: I1213 04:02:03.379642 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-8cbcb47f-6sh6l" Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.692830 4766 generic.go:334] "Generic (PLEG): container finished" podID="8d1d4ee6-5f01-4c7b-b326-6f4dc686022e" containerID="d377101a33abd7a68341b0bcf2e90e4796f716e273c6127df54d62ee090bd7bd" exitCode=0 Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.692943 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e","Type":"ContainerDied","Data":"d377101a33abd7a68341b0bcf2e90e4796f716e273c6127df54d62ee090bd7bd"} Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.694976 4766 generic.go:334] "Generic (PLEG): container finished" podID="19b5d6fe-47e9-4816-907d-af0d46b556d2" containerID="3a1ab7e94f19c464c1a2b4473c3e99a2dc35fb07e6f81522cc57002e03b2c789" exitCode=0 Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.695057 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"19b5d6fe-47e9-4816-907d-af0d46b556d2","Type":"ContainerDied","Data":"3a1ab7e94f19c464c1a2b4473c3e99a2dc35fb07e6f81522cc57002e03b2c789"} Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.697023 4766 generic.go:334] "Generic (PLEG): container finished" podID="cfe14027-5466-43b9-90a1-04ea55370210" containerID="270775bd2ec1cf1ff9929862991007924740727b0dc79d3c8af72a86bcb3a324" exitCode=0 Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.697059 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"cfe14027-5466-43b9-90a1-04ea55370210","Type":"ContainerDied","Data":"270775bd2ec1cf1ff9929862991007924740727b0dc79d3c8af72a86bcb3a324"} Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.928965 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/memcached-0"] Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.929866 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.933692 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"memcached-memcached-dockercfg-vxhzb" Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.933922 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"memcached-config-data" Dec 13 04:02:05 crc kubenswrapper[4766]: I1213 04:02:05.942244 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.004653 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/490a8c60-a83e-44ca-93ff-e5b802d5d20a-config-data\") pod \"memcached-0\" (UID: \"490a8c60-a83e-44ca-93ff-e5b802d5d20a\") " pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.004735 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g87w9\" (UniqueName: \"kubernetes.io/projected/490a8c60-a83e-44ca-93ff-e5b802d5d20a-kube-api-access-g87w9\") pod \"memcached-0\" (UID: \"490a8c60-a83e-44ca-93ff-e5b802d5d20a\") " pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.004792 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/490a8c60-a83e-44ca-93ff-e5b802d5d20a-kolla-config\") pod \"memcached-0\" (UID: \"490a8c60-a83e-44ca-93ff-e5b802d5d20a\") " pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.106610 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/490a8c60-a83e-44ca-93ff-e5b802d5d20a-kolla-config\") pod \"memcached-0\" (UID: \"490a8c60-a83e-44ca-93ff-e5b802d5d20a\") " pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.106701 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/490a8c60-a83e-44ca-93ff-e5b802d5d20a-config-data\") pod \"memcached-0\" (UID: \"490a8c60-a83e-44ca-93ff-e5b802d5d20a\") " pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.106761 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g87w9\" (UniqueName: \"kubernetes.io/projected/490a8c60-a83e-44ca-93ff-e5b802d5d20a-kube-api-access-g87w9\") pod \"memcached-0\" (UID: \"490a8c60-a83e-44ca-93ff-e5b802d5d20a\") " pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.107682 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/490a8c60-a83e-44ca-93ff-e5b802d5d20a-config-data\") pod \"memcached-0\" (UID: \"490a8c60-a83e-44ca-93ff-e5b802d5d20a\") " pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.107786 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/490a8c60-a83e-44ca-93ff-e5b802d5d20a-kolla-config\") pod \"memcached-0\" (UID: \"490a8c60-a83e-44ca-93ff-e5b802d5d20a\") " pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:06 
crc kubenswrapper[4766]: I1213 04:02:06.136362 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g87w9\" (UniqueName: \"kubernetes.io/projected/490a8c60-a83e-44ca-93ff-e5b802d5d20a-kube-api-access-g87w9\") pod \"memcached-0\" (UID: \"490a8c60-a83e-44ca-93ff-e5b802d5d20a\") " pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.250915 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.679671 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.706225 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"cfe14027-5466-43b9-90a1-04ea55370210","Type":"ContainerStarted","Data":"34110a46e858604a83c581496ad93e51246216d620d8ad0c8d8c12160284c7e5"} Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.716475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"8d1d4ee6-5f01-4c7b-b326-6f4dc686022e","Type":"ContainerStarted","Data":"d6e63e200847735fb6274098e5e5691bd5bd903336c74a51036bf62fc748a964"} Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.718685 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"490a8c60-a83e-44ca-93ff-e5b802d5d20a","Type":"ContainerStarted","Data":"98de58010da234fa771f485ee94c23da6b20502da2ea6baeac1fb62cedd87e7f"} Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.720155 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"19b5d6fe-47e9-4816-907d-af0d46b556d2","Type":"ContainerStarted","Data":"0b87c9570082beb4df98e6f365332739e1ae7dd46875bbb29727123444ec5b09"} Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.732706 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-1" podStartSLOduration=11.559392795 podStartE2EDuration="21.732690858s" podCreationTimestamp="2025-12-13 04:01:45 +0000 UTC" firstStartedPulling="2025-12-13 04:01:49.273464577 +0000 UTC m=+1040.783397541" lastFinishedPulling="2025-12-13 04:01:59.44676264 +0000 UTC m=+1050.956695604" observedRunningTime="2025-12-13 04:02:06.730097793 +0000 UTC m=+1058.240030757" watchObservedRunningTime="2025-12-13 04:02:06.732690858 +0000 UTC m=+1058.242623822" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.753447 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-2" podStartSLOduration=11.934494014 podStartE2EDuration="21.753406238s" podCreationTimestamp="2025-12-13 04:01:45 +0000 UTC" firstStartedPulling="2025-12-13 04:01:49.59933482 +0000 UTC m=+1041.109267784" lastFinishedPulling="2025-12-13 04:01:59.418247044 +0000 UTC m=+1050.928180008" observedRunningTime="2025-12-13 04:02:06.750289828 +0000 UTC m=+1058.260222812" watchObservedRunningTime="2025-12-13 04:02:06.753406238 +0000 UTC m=+1058.263339202" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.779501 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-0" podStartSLOduration=12.155426499 podStartE2EDuration="21.779482753s" podCreationTimestamp="2025-12-13 04:01:45 +0000 UTC" firstStartedPulling="2025-12-13 04:01:49.795119217 +0000 UTC 
m=+1041.305052181" lastFinishedPulling="2025-12-13 04:01:59.419175471 +0000 UTC m=+1050.929108435" observedRunningTime="2025-12-13 04:02:06.775649902 +0000 UTC m=+1058.285582876" watchObservedRunningTime="2025-12-13 04:02:06.779482753 +0000 UTC m=+1058.289415717" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.948298 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-1" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.948368 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-1" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.963046 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-2" Dec 13 04:02:06 crc kubenswrapper[4766]: I1213 04:02:06.963111 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-2" Dec 13 04:02:07 crc kubenswrapper[4766]: I1213 04:02:07.214653 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-0" Dec 13 04:02:07 crc kubenswrapper[4766]: I1213 04:02:07.214724 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-0" Dec 13 04:02:07 crc kubenswrapper[4766]: I1213 04:02:07.293775 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:02:07 crc kubenswrapper[4766]: I1213 04:02:07.306234 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:02:07 crc kubenswrapper[4766]: I1213 04:02:07.359214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:02:07 crc kubenswrapper[4766]: I1213 04:02:07.789277 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:02:09 crc kubenswrapper[4766]: E1213 04:02:09.122485 4766 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.212:51114->38.102.83.212:39905: write tcp 38.102.83.212:51114->38.102.83.212:39905: write: broken pipe Dec 13 04:02:09 crc kubenswrapper[4766]: E1213 04:02:09.168354 4766 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.212:51126->38.102.83.212:39905: write tcp 38.102.83.212:51126->38.102.83.212:39905: write: broken pipe Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.215669 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-66djs"] Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.218473 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.225077 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-66djs"] Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.255535 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-rtfnr" Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.260370 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nfn9\" (UniqueName: \"kubernetes.io/projected/d24473b2-c2f4-4ee9-8389-390a9b05d0ca-kube-api-access-5nfn9\") pod \"rabbitmq-cluster-operator-index-66djs\" (UID: \"d24473b2-c2f4-4ee9-8389-390a9b05d0ca\") " pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.362735 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nfn9\" (UniqueName: \"kubernetes.io/projected/d24473b2-c2f4-4ee9-8389-390a9b05d0ca-kube-api-access-5nfn9\") pod \"rabbitmq-cluster-operator-index-66djs\" (UID: \"d24473b2-c2f4-4ee9-8389-390a9b05d0ca\") " pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.385531 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nfn9\" (UniqueName: \"kubernetes.io/projected/d24473b2-c2f4-4ee9-8389-390a9b05d0ca-kube-api-access-5nfn9\") pod \"rabbitmq-cluster-operator-index-66djs\" (UID: \"d24473b2-c2f4-4ee9-8389-390a9b05d0ca\") " pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.576896 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.733077 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.733493 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.733544 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.734242 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f383501fdc030c21904f55bdc8f043a9f6b6848b9b670e0fc026beaf3079e7c"} pod="openshift-machine-config-operator/machine-config-daemon-94w9l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 04:02:09 crc kubenswrapper[4766]: I1213 04:02:09.734311 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" containerID="cri-o://6f383501fdc030c21904f55bdc8f043a9f6b6848b9b670e0fc026beaf3079e7c" gracePeriod=600 Dec 13 04:02:10 crc kubenswrapper[4766]: I1213 04:02:10.038843 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-66djs"] Dec 13 04:02:10 crc kubenswrapper[4766]: W1213 04:02:10.047636 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd24473b2_c2f4_4ee9_8389_390a9b05d0ca.slice/crio-b4f6a53de4b55f544c3ab1aad23716ccd4715f4956b0a809bd4118ac968af768 WatchSource:0}: Error finding container b4f6a53de4b55f544c3ab1aad23716ccd4715f4956b0a809bd4118ac968af768: Status 404 returned error can't find the container with id b4f6a53de4b55f544c3ab1aad23716ccd4715f4956b0a809bd4118ac968af768 Dec 13 04:02:10 crc kubenswrapper[4766]: I1213 04:02:10.752205 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" event={"ID":"d24473b2-c2f4-4ee9-8389-390a9b05d0ca","Type":"ContainerStarted","Data":"b4f6a53de4b55f544c3ab1aad23716ccd4715f4956b0a809bd4118ac968af768"} Dec 13 04:02:10 crc kubenswrapper[4766]: I1213 04:02:10.754532 4766 generic.go:334] "Generic (PLEG): container finished" podID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerID="6f383501fdc030c21904f55bdc8f043a9f6b6848b9b670e0fc026beaf3079e7c" exitCode=0 Dec 13 04:02:10 crc kubenswrapper[4766]: I1213 04:02:10.754578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerDied","Data":"6f383501fdc030c21904f55bdc8f043a9f6b6848b9b670e0fc026beaf3079e7c"} Dec 13 04:02:10 crc 
kubenswrapper[4766]: I1213 04:02:10.754620 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"dc36ef313dc384ef79916b9e90a1008e5893b470ee70f62a783fefe019c14e1a"} Dec 13 04:02:10 crc kubenswrapper[4766]: I1213 04:02:10.754650 4766 scope.go:117] "RemoveContainer" containerID="f2151a06f5707b72e56cc0032442b9fe647442317230e16d90226a37ee92ba85" Dec 13 04:02:11 crc kubenswrapper[4766]: I1213 04:02:11.808388 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rtqrm"] Dec 13 04:02:11 crc kubenswrapper[4766]: I1213 04:02:11.808932 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rtqrm" podUID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerName="registry-server" containerID="cri-o://1011c65e645514dbe0dc8faf79556c6a8485b99910dcbd98a457c027cf44fc5f" gracePeriod=2 Dec 13 04:02:12 crc kubenswrapper[4766]: I1213 04:02:12.779506 4766 generic.go:334] "Generic (PLEG): container finished" podID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerID="1011c65e645514dbe0dc8faf79556c6a8485b99910dcbd98a457c027cf44fc5f" exitCode=0 Dec 13 04:02:12 crc kubenswrapper[4766]: I1213 04:02:12.779615 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtqrm" event={"ID":"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2","Type":"ContainerDied","Data":"1011c65e645514dbe0dc8faf79556c6a8485b99910dcbd98a457c027cf44fc5f"} Dec 13 04:02:12 crc kubenswrapper[4766]: E1213 04:02:12.826928 4766 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.212:51176->38.102.83.212:39905: read tcp 38.102.83.212:51176->38.102.83.212:39905: read: connection reset by peer Dec 13 04:02:15 crc kubenswrapper[4766]: I1213 04:02:15.049214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-2" Dec 13 04:02:15 crc kubenswrapper[4766]: I1213 04:02:15.111749 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-2" Dec 13 04:02:15 crc kubenswrapper[4766]: I1213 04:02:15.215699 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-66djs"] Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.021004 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-b8rfq"] Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.022502 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.105447 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-b8rfq"] Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.147584 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmx7f\" (UniqueName: \"kubernetes.io/projected/c7ffb6ae-a1fa-4f11-b908-9e42901c8c25-kube-api-access-dmx7f\") pod \"rabbitmq-cluster-operator-index-b8rfq\" (UID: \"c7ffb6ae-a1fa-4f11-b908-9e42901c8c25\") " pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.244401 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.248797 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmx7f\" (UniqueName: \"kubernetes.io/projected/c7ffb6ae-a1fa-4f11-b908-9e42901c8c25-kube-api-access-dmx7f\") pod \"rabbitmq-cluster-operator-index-b8rfq\" (UID: \"c7ffb6ae-a1fa-4f11-b908-9e42901c8c25\") " pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.278253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmx7f\" (UniqueName: \"kubernetes.io/projected/c7ffb6ae-a1fa-4f11-b908-9e42901c8c25-kube-api-access-dmx7f\") pod \"rabbitmq-cluster-operator-index-b8rfq\" (UID: \"c7ffb6ae-a1fa-4f11-b908-9e42901c8c25\") " pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.347111 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.349420 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-catalog-content\") pod \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.349510 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-utilities\") pod \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.349604 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zl5f\" (UniqueName: \"kubernetes.io/projected/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-kube-api-access-5zl5f\") pod \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\" (UID: \"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2\") " Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.356595 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-utilities" (OuterVolumeSpecName: "utilities") pod "f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" (UID: "f7ae1462-84a6-4eb2-a1ee-3b1634e09da2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.362848 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-kube-api-access-5zl5f" (OuterVolumeSpecName: "kube-api-access-5zl5f") pod "f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" (UID: "f7ae1462-84a6-4eb2-a1ee-3b1634e09da2"). InnerVolumeSpecName "kube-api-access-5zl5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.430083 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" (UID: "f7ae1462-84a6-4eb2-a1ee-3b1634e09da2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.451289 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.451339 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zl5f\" (UniqueName: \"kubernetes.io/projected/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-kube-api-access-5zl5f\") on node \"crc\" DevicePath \"\"" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.451353 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.821062 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtqrm" event={"ID":"f7ae1462-84a6-4eb2-a1ee-3b1634e09da2","Type":"ContainerDied","Data":"5f712f76176415d877f0007b47f600048f23a0206c1375ad4432501d8bccaa6e"} Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.821421 4766 scope.go:117] "RemoveContainer" containerID="1011c65e645514dbe0dc8faf79556c6a8485b99910dcbd98a457c027cf44fc5f" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.821095 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rtqrm" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.823631 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"490a8c60-a83e-44ca-93ff-e5b802d5d20a","Type":"ContainerStarted","Data":"6509514e5f72a0fdcb241fefa88e78452eadfa9002332e9bad6a436e861062ad"} Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.824617 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.860813 4766 scope.go:117] "RemoveContainer" containerID="bfd19ab6c0350c8971d1ff76b4a1a636cc8a6b508bcfeb03abaf92769067b59c" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.864603 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/memcached-0" podStartSLOduration=2.339915835 podStartE2EDuration="11.864578481s" podCreationTimestamp="2025-12-13 04:02:05 +0000 UTC" firstStartedPulling="2025-12-13 04:02:06.687958314 +0000 UTC m=+1058.197891278" lastFinishedPulling="2025-12-13 04:02:16.21262096 +0000 UTC m=+1067.722553924" observedRunningTime="2025-12-13 04:02:16.861278686 +0000 UTC m=+1068.371211650" watchObservedRunningTime="2025-12-13 04:02:16.864578481 +0000 UTC m=+1068.374511445" Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.896569 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rtqrm"] Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.902508 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-b8rfq"] Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.906845 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rtqrm"] Dec 13 04:02:16 crc kubenswrapper[4766]: I1213 04:02:16.912712 4766 scope.go:117] "RemoveContainer" containerID="29040dff5be67fa0e726a02ce4e2c5798faecea3db88b16ca89035c16b0e5702" Dec 13 04:02:17 crc kubenswrapper[4766]: I1213 04:02:17.626534 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" path="/var/lib/kubelet/pods/f7ae1462-84a6-4eb2-a1ee-3b1634e09da2/volumes" Dec 13 04:02:17 crc kubenswrapper[4766]: I1213 04:02:17.831440 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" event={"ID":"c7ffb6ae-a1fa-4f11-b908-9e42901c8c25","Type":"ContainerStarted","Data":"921d3f04b8b8295d53a6d7bc5fed39b3f4d290c3165a512f640b684aaf8fbd68"} Dec 13 04:02:21 crc kubenswrapper[4766]: I1213 04:02:21.253701 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/memcached-0" Dec 13 04:02:23 crc kubenswrapper[4766]: I1213 04:02:23.886500 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" event={"ID":"c7ffb6ae-a1fa-4f11-b908-9e42901c8c25","Type":"ContainerStarted","Data":"7990f78ef4a2e2cfd59d5af1ba53973c7ac04c37d4c97800c8f5d01483fd164f"} Dec 13 04:02:23 crc kubenswrapper[4766]: I1213 04:02:23.888018 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" event={"ID":"d24473b2-c2f4-4ee9-8389-390a9b05d0ca","Type":"ContainerStarted","Data":"4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3"} Dec 13 04:02:23 crc kubenswrapper[4766]: I1213 04:02:23.888114 4766 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" podUID="d24473b2-c2f4-4ee9-8389-390a9b05d0ca" containerName="registry-server" containerID="cri-o://4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3" gracePeriod=2 Dec 13 04:02:23 crc kubenswrapper[4766]: I1213 04:02:23.914117 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" podStartSLOduration=3.089347562 podStartE2EDuration="8.914093537s" podCreationTimestamp="2025-12-13 04:02:15 +0000 UTC" firstStartedPulling="2025-12-13 04:02:16.912854688 +0000 UTC m=+1068.422787652" lastFinishedPulling="2025-12-13 04:02:22.737600663 +0000 UTC m=+1074.247533627" observedRunningTime="2025-12-13 04:02:23.903249243 +0000 UTC m=+1075.413182207" watchObservedRunningTime="2025-12-13 04:02:23.914093537 +0000 UTC m=+1075.424026501" Dec 13 04:02:23 crc kubenswrapper[4766]: I1213 04:02:23.925645 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" podStartSLOduration=2.239084655 podStartE2EDuration="14.925623941s" podCreationTimestamp="2025-12-13 04:02:09 +0000 UTC" firstStartedPulling="2025-12-13 04:02:10.05114808 +0000 UTC m=+1061.561081044" lastFinishedPulling="2025-12-13 04:02:22.737687366 +0000 UTC m=+1074.247620330" observedRunningTime="2025-12-13 04:02:23.92214888 +0000 UTC m=+1075.432081844" watchObservedRunningTime="2025-12-13 04:02:23.925623941 +0000 UTC m=+1075.435556905" Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.347530 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.457673 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nfn9\" (UniqueName: \"kubernetes.io/projected/d24473b2-c2f4-4ee9-8389-390a9b05d0ca-kube-api-access-5nfn9\") pod \"d24473b2-c2f4-4ee9-8389-390a9b05d0ca\" (UID: \"d24473b2-c2f4-4ee9-8389-390a9b05d0ca\") " Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.464725 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24473b2-c2f4-4ee9-8389-390a9b05d0ca-kube-api-access-5nfn9" (OuterVolumeSpecName: "kube-api-access-5nfn9") pod "d24473b2-c2f4-4ee9-8389-390a9b05d0ca" (UID: "d24473b2-c2f4-4ee9-8389-390a9b05d0ca"). InnerVolumeSpecName "kube-api-access-5nfn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.559917 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nfn9\" (UniqueName: \"kubernetes.io/projected/d24473b2-c2f4-4ee9-8389-390a9b05d0ca-kube-api-access-5nfn9\") on node \"crc\" DevicePath \"\"" Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.896151 4766 generic.go:334] "Generic (PLEG): container finished" podID="d24473b2-c2f4-4ee9-8389-390a9b05d0ca" containerID="4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3" exitCode=0 Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.896191 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.896222 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" event={"ID":"d24473b2-c2f4-4ee9-8389-390a9b05d0ca","Type":"ContainerDied","Data":"4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3"} Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.896286 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-66djs" event={"ID":"d24473b2-c2f4-4ee9-8389-390a9b05d0ca","Type":"ContainerDied","Data":"b4f6a53de4b55f544c3ab1aad23716ccd4715f4956b0a809bd4118ac968af768"} Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.896307 4766 scope.go:117] "RemoveContainer" containerID="4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3" Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.916079 4766 scope.go:117] "RemoveContainer" containerID="4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3" Dec 13 04:02:24 crc kubenswrapper[4766]: E1213 04:02:24.917078 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3\": container with ID starting with 4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3 not found: ID does not exist" containerID="4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3" Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.917118 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3"} err="failed to get container status \"4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3\": rpc error: code = NotFound desc = could not find container \"4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3\": container with ID starting with 4b7c6e80fc585c0088d81b4d5ba794533cf78f0e4f806a9af0489a7e267738b3 not found: ID does not exist" Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.928875 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-66djs"] Dec 13 04:02:24 crc kubenswrapper[4766]: I1213 04:02:24.932559 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-66djs"] Dec 13 04:02:25 crc kubenswrapper[4766]: I1213 04:02:25.628087 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d24473b2-c2f4-4ee9-8389-390a9b05d0ca" path="/var/lib/kubelet/pods/d24473b2-c2f4-4ee9-8389-390a9b05d0ca/volumes" Dec 13 04:02:26 crc kubenswrapper[4766]: I1213 04:02:26.347966 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" Dec 13 04:02:26 crc kubenswrapper[4766]: I1213 04:02:26.349953 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" Dec 13 04:02:26 crc kubenswrapper[4766]: I1213 04:02:26.382232 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" Dec 13 04:02:27 crc kubenswrapper[4766]: I1213 04:02:27.012302 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/openstack-galera-2" 
podUID="19b5d6fe-47e9-4816-907d-af0d46b556d2" containerName="galera" probeResult="failure" output=< Dec 13 04:02:27 crc kubenswrapper[4766]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Dec 13 04:02:27 crc kubenswrapper[4766]: > Dec 13 04:02:27 crc kubenswrapper[4766]: I1213 04:02:27.956083 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-b8rfq" Dec 13 04:02:30 crc kubenswrapper[4766]: I1213 04:02:30.404037 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-0" Dec 13 04:02:30 crc kubenswrapper[4766]: I1213 04:02:30.450627 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-0" Dec 13 04:02:31 crc kubenswrapper[4766]: I1213 04:02:31.691697 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-1" Dec 13 04:02:31 crc kubenswrapper[4766]: I1213 04:02:31.730182 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-1" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.462962 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9"] Dec 13 04:02:41 crc kubenswrapper[4766]: E1213 04:02:41.463985 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24473b2-c2f4-4ee9-8389-390a9b05d0ca" containerName="registry-server" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.464009 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24473b2-c2f4-4ee9-8389-390a9b05d0ca" containerName="registry-server" Dec 13 04:02:41 crc kubenswrapper[4766]: E1213 04:02:41.464027 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerName="extract-content" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.464034 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerName="extract-content" Dec 13 04:02:41 crc kubenswrapper[4766]: E1213 04:02:41.464044 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerName="registry-server" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.464051 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerName="registry-server" Dec 13 04:02:41 crc kubenswrapper[4766]: E1213 04:02:41.464063 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerName="extract-utilities" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.464071 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerName="extract-utilities" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.464227 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d24473b2-c2f4-4ee9-8389-390a9b05d0ca" containerName="registry-server" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.464242 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ae1462-84a6-4eb2-a1ee-3b1634e09da2" containerName="registry-server" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.465532 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.468238 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vg794" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.481945 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9"] Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.570700 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.571121 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.571156 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddp86\" (UniqueName: \"kubernetes.io/projected/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-kube-api-access-ddp86\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.672397 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.672529 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.672554 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddp86\" (UniqueName: \"kubernetes.io/projected/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-kube-api-access-ddp86\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.673365 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.673418 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.693582 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddp86\" (UniqueName: \"kubernetes.io/projected/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-kube-api-access-ddp86\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:41 crc kubenswrapper[4766]: I1213 04:02:41.795466 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:42 crc kubenswrapper[4766]: I1213 04:02:42.256290 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9"] Dec 13 04:02:43 crc kubenswrapper[4766]: I1213 04:02:43.022956 4766 generic.go:334] "Generic (PLEG): container finished" podID="c7f3ff78-d2df-409f-96f8-6c92d67ba29f" containerID="5002d996d81de0029b6f10a9b3b3ed5ece2e579d864f994042d130314555c2c4" exitCode=0 Dec 13 04:02:43 crc kubenswrapper[4766]: I1213 04:02:43.023052 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" event={"ID":"c7f3ff78-d2df-409f-96f8-6c92d67ba29f","Type":"ContainerDied","Data":"5002d996d81de0029b6f10a9b3b3ed5ece2e579d864f994042d130314555c2c4"} Dec 13 04:02:43 crc kubenswrapper[4766]: I1213 04:02:43.023094 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" event={"ID":"c7f3ff78-d2df-409f-96f8-6c92d67ba29f","Type":"ContainerStarted","Data":"f76827125004073a803858c8357fb4fb9f59b22a43c8759b8a03ae9dff3c7262"} Dec 13 04:02:44 crc kubenswrapper[4766]: I1213 04:02:44.030870 4766 generic.go:334] "Generic (PLEG): container finished" podID="c7f3ff78-d2df-409f-96f8-6c92d67ba29f" containerID="a2487dec1590502ae473b6388424ce6876894f89af80acce5abfdb004f0a2635" exitCode=0 Dec 13 04:02:44 crc kubenswrapper[4766]: I1213 04:02:44.030911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" event={"ID":"c7f3ff78-d2df-409f-96f8-6c92d67ba29f","Type":"ContainerDied","Data":"a2487dec1590502ae473b6388424ce6876894f89af80acce5abfdb004f0a2635"} Dec 13 04:02:45 crc kubenswrapper[4766]: I1213 04:02:45.038565 4766 generic.go:334] "Generic (PLEG): container finished" podID="c7f3ff78-d2df-409f-96f8-6c92d67ba29f" containerID="f1c010b1278d633b189cb5de8b7b504a5304e7d1fae40529fa89564c453108fb" exitCode=0 Dec 13 04:02:45 crc kubenswrapper[4766]: I1213 04:02:45.038630 4766 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" event={"ID":"c7f3ff78-d2df-409f-96f8-6c92d67ba29f","Type":"ContainerDied","Data":"f1c010b1278d633b189cb5de8b7b504a5304e7d1fae40529fa89564c453108fb"} Dec 13 04:02:46 crc kubenswrapper[4766]: I1213 04:02:46.292227 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:46 crc kubenswrapper[4766]: I1213 04:02:46.439811 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-bundle\") pod \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " Dec 13 04:02:46 crc kubenswrapper[4766]: I1213 04:02:46.439900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-util\") pod \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " Dec 13 04:02:46 crc kubenswrapper[4766]: I1213 04:02:46.439978 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddp86\" (UniqueName: \"kubernetes.io/projected/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-kube-api-access-ddp86\") pod \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\" (UID: \"c7f3ff78-d2df-409f-96f8-6c92d67ba29f\") " Dec 13 04:02:46 crc kubenswrapper[4766]: I1213 04:02:46.440632 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-bundle" (OuterVolumeSpecName: "bundle") pod "c7f3ff78-d2df-409f-96f8-6c92d67ba29f" (UID: "c7f3ff78-d2df-409f-96f8-6c92d67ba29f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:02:46 crc kubenswrapper[4766]: I1213 04:02:46.446504 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-kube-api-access-ddp86" (OuterVolumeSpecName: "kube-api-access-ddp86") pod "c7f3ff78-d2df-409f-96f8-6c92d67ba29f" (UID: "c7f3ff78-d2df-409f-96f8-6c92d67ba29f"). InnerVolumeSpecName "kube-api-access-ddp86". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:02:46 crc kubenswrapper[4766]: I1213 04:02:46.456486 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-util" (OuterVolumeSpecName: "util") pod "c7f3ff78-d2df-409f-96f8-6c92d67ba29f" (UID: "c7f3ff78-d2df-409f-96f8-6c92d67ba29f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:02:46 crc kubenswrapper[4766]: I1213 04:02:46.542040 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 04:02:46 crc kubenswrapper[4766]: I1213 04:02:46.542076 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-util\") on node \"crc\" DevicePath \"\"" Dec 13 04:02:46 crc kubenswrapper[4766]: I1213 04:02:46.542089 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddp86\" (UniqueName: \"kubernetes.io/projected/c7f3ff78-d2df-409f-96f8-6c92d67ba29f-kube-api-access-ddp86\") on node \"crc\" DevicePath \"\"" Dec 13 04:02:47 crc kubenswrapper[4766]: I1213 04:02:47.053157 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" event={"ID":"c7f3ff78-d2df-409f-96f8-6c92d67ba29f","Type":"ContainerDied","Data":"f76827125004073a803858c8357fb4fb9f59b22a43c8759b8a03ae9dff3c7262"} Dec 13 04:02:47 crc kubenswrapper[4766]: I1213 04:02:47.053220 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f76827125004073a803858c8357fb4fb9f59b22a43c8759b8a03ae9dff3c7262" Dec 13 04:02:47 crc kubenswrapper[4766]: I1213 04:02:47.053304 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.644637 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx"] Dec 13 04:02:55 crc kubenswrapper[4766]: E1213 04:02:55.645793 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f3ff78-d2df-409f-96f8-6c92d67ba29f" containerName="pull" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.645816 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f3ff78-d2df-409f-96f8-6c92d67ba29f" containerName="pull" Dec 13 04:02:55 crc kubenswrapper[4766]: E1213 04:02:55.645831 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f3ff78-d2df-409f-96f8-6c92d67ba29f" containerName="util" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.645839 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f3ff78-d2df-409f-96f8-6c92d67ba29f" containerName="util" Dec 13 04:02:55 crc kubenswrapper[4766]: E1213 04:02:55.645857 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f3ff78-d2df-409f-96f8-6c92d67ba29f" containerName="extract" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.645865 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f3ff78-d2df-409f-96f8-6c92d67ba29f" containerName="extract" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.646065 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f3ff78-d2df-409f-96f8-6c92d67ba29f" containerName="extract" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.646718 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.649852 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-rz8zr" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.660466 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx"] Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.752329 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4mnj\" (UniqueName: \"kubernetes.io/projected/ec9e8dc9-892c-4716-9552-a94ae6996b2b-kube-api-access-q4mnj\") pod \"rabbitmq-cluster-operator-779fc9694b-cchkx\" (UID: \"ec9e8dc9-892c-4716-9552-a94ae6996b2b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.854614 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4mnj\" (UniqueName: \"kubernetes.io/projected/ec9e8dc9-892c-4716-9552-a94ae6996b2b-kube-api-access-q4mnj\") pod \"rabbitmq-cluster-operator-779fc9694b-cchkx\" (UID: \"ec9e8dc9-892c-4716-9552-a94ae6996b2b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.880200 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4mnj\" (UniqueName: \"kubernetes.io/projected/ec9e8dc9-892c-4716-9552-a94ae6996b2b-kube-api-access-q4mnj\") pod \"rabbitmq-cluster-operator-779fc9694b-cchkx\" (UID: \"ec9e8dc9-892c-4716-9552-a94ae6996b2b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx" Dec 13 04:02:55 crc kubenswrapper[4766]: I1213 04:02:55.981195 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx" Dec 13 04:02:56 crc kubenswrapper[4766]: I1213 04:02:56.245211 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx"] Dec 13 04:02:57 crc kubenswrapper[4766]: I1213 04:02:57.141651 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx" event={"ID":"ec9e8dc9-892c-4716-9552-a94ae6996b2b","Type":"ContainerStarted","Data":"1e345657fa53c535591b0bee79151d2ea02c41e46672947e828ef64aea482991"} Dec 13 04:03:02 crc kubenswrapper[4766]: I1213 04:03:02.179581 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx" event={"ID":"ec9e8dc9-892c-4716-9552-a94ae6996b2b","Type":"ContainerStarted","Data":"42b5ae35f3882dfe6bf96d194285f0269961354c31073eedb814be407211024a"} Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.215158 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-cchkx" podStartSLOduration=4.360699425 podStartE2EDuration="9.215133586s" podCreationTimestamp="2025-12-13 04:02:55 +0000 UTC" firstStartedPulling="2025-12-13 04:02:56.292772386 +0000 UTC m=+1107.802705350" lastFinishedPulling="2025-12-13 04:03:01.147206547 +0000 UTC m=+1112.657139511" observedRunningTime="2025-12-13 04:03:02.201486422 +0000 UTC m=+1113.711419426" watchObservedRunningTime="2025-12-13 04:03:04.215133586 +0000 UTC m=+1115.725066550" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.215869 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.217708 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.220839 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-server-conf" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.221067 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-default-user" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.221253 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-server-dockercfg-mkzzg" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.221386 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-plugins-conf" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.222243 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-erlang-cookie" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.229091 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.371237 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ae00965-a778-4106-85dd-84fba5782c83-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.371329 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cee6d03b-04f4-4f31-afac-15151c23315f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cee6d03b-04f4-4f31-afac-15151c23315f\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.371365 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ae00965-a778-4106-85dd-84fba5782c83-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.371397 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ae00965-a778-4106-85dd-84fba5782c83-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.371440 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6hx\" (UniqueName: \"kubernetes.io/projected/9ae00965-a778-4106-85dd-84fba5782c83-kube-api-access-9g6hx\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.371488 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ae00965-a778-4106-85dd-84fba5782c83-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " 
pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.371622 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ae00965-a778-4106-85dd-84fba5782c83-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.371725 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ae00965-a778-4106-85dd-84fba5782c83-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.473714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ae00965-a778-4106-85dd-84fba5782c83-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.473803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ae00965-a778-4106-85dd-84fba5782c83-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.473844 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cee6d03b-04f4-4f31-afac-15151c23315f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cee6d03b-04f4-4f31-afac-15151c23315f\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.473864 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ae00965-a778-4106-85dd-84fba5782c83-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.473888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ae00965-a778-4106-85dd-84fba5782c83-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.473923 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g6hx\" (UniqueName: \"kubernetes.io/projected/9ae00965-a778-4106-85dd-84fba5782c83-kube-api-access-9g6hx\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.473941 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ae00965-a778-4106-85dd-84fba5782c83-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc 
kubenswrapper[4766]: I1213 04:03:04.473973 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ae00965-a778-4106-85dd-84fba5782c83-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.475151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ae00965-a778-4106-85dd-84fba5782c83-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.475696 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ae00965-a778-4106-85dd-84fba5782c83-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.475773 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ae00965-a778-4106-85dd-84fba5782c83-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.488172 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ae00965-a778-4106-85dd-84fba5782c83-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.488335 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ae00965-a778-4106-85dd-84fba5782c83-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.489827 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.489859 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cee6d03b-04f4-4f31-afac-15151c23315f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cee6d03b-04f4-4f31-afac-15151c23315f\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a25b546839ff1a61806ed8b74b466d25227dec3eba12ac3333070ace0afb9272/globalmount\"" pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.495015 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ae00965-a778-4106-85dd-84fba5782c83-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.496176 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g6hx\" (UniqueName: \"kubernetes.io/projected/9ae00965-a778-4106-85dd-84fba5782c83-kube-api-access-9g6hx\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.517554 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cee6d03b-04f4-4f31-afac-15151c23315f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cee6d03b-04f4-4f31-afac-15151c23315f\") pod \"rabbitmq-server-0\" (UID: \"9ae00965-a778-4106-85dd-84fba5782c83\") " pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.548473 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:04 crc kubenswrapper[4766]: I1213 04:03:04.790101 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Dec 13 04:03:04 crc kubenswrapper[4766]: W1213 04:03:04.802626 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae00965_a778_4106_85dd_84fba5782c83.slice/crio-04a58605e171d60121826d1017a0c494eb06418e436c6033a0adf3567bb4aa9c WatchSource:0}: Error finding container 04a58605e171d60121826d1017a0c494eb06418e436c6033a0adf3567bb4aa9c: Status 404 returned error can't find the container with id 04a58605e171d60121826d1017a0c494eb06418e436c6033a0adf3567bb4aa9c Dec 13 04:03:05 crc kubenswrapper[4766]: I1213 04:03:05.217436 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"9ae00965-a778-4106-85dd-84fba5782c83","Type":"ContainerStarted","Data":"04a58605e171d60121826d1017a0c494eb06418e436c6033a0adf3567bb4aa9c"} Dec 13 04:03:05 crc kubenswrapper[4766]: I1213 04:03:05.631027 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-hn8rh"] Dec 13 04:03:05 crc kubenswrapper[4766]: I1213 04:03:05.631899 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-hn8rh" Dec 13 04:03:05 crc kubenswrapper[4766]: I1213 04:03:05.634456 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-5g7md" Dec 13 04:03:05 crc kubenswrapper[4766]: I1213 04:03:05.651858 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-hn8rh"] Dec 13 04:03:05 crc kubenswrapper[4766]: I1213 04:03:05.803701 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrds9\" (UniqueName: \"kubernetes.io/projected/002ad17d-8277-4767-824e-93b28859e781-kube-api-access-zrds9\") pod \"keystone-operator-index-hn8rh\" (UID: \"002ad17d-8277-4767-824e-93b28859e781\") " pod="openstack-operators/keystone-operator-index-hn8rh" Dec 13 04:03:05 crc kubenswrapper[4766]: I1213 04:03:05.905213 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrds9\" (UniqueName: \"kubernetes.io/projected/002ad17d-8277-4767-824e-93b28859e781-kube-api-access-zrds9\") pod \"keystone-operator-index-hn8rh\" (UID: \"002ad17d-8277-4767-824e-93b28859e781\") " pod="openstack-operators/keystone-operator-index-hn8rh" Dec 13 04:03:05 crc kubenswrapper[4766]: I1213 04:03:05.926696 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrds9\" (UniqueName: \"kubernetes.io/projected/002ad17d-8277-4767-824e-93b28859e781-kube-api-access-zrds9\") pod \"keystone-operator-index-hn8rh\" (UID: \"002ad17d-8277-4767-824e-93b28859e781\") " pod="openstack-operators/keystone-operator-index-hn8rh" Dec 13 04:03:05 crc kubenswrapper[4766]: I1213 04:03:05.954689 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-hn8rh" Dec 13 04:03:06 crc kubenswrapper[4766]: I1213 04:03:06.320379 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-hn8rh"] Dec 13 04:03:07 crc kubenswrapper[4766]: I1213 04:03:07.269987 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-hn8rh" event={"ID":"002ad17d-8277-4767-824e-93b28859e781","Type":"ContainerStarted","Data":"e69ceee2942da2eb5da97a898b322f8f085563b3f59f865fff664b04a30755a4"} Dec 13 04:03:13 crc kubenswrapper[4766]: I1213 04:03:13.327676 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"9ae00965-a778-4106-85dd-84fba5782c83","Type":"ContainerStarted","Data":"7bd860f1c13e071c938cd204c3039ade63dd1538291bd438a088705ca2a05c39"} Dec 13 04:03:13 crc kubenswrapper[4766]: I1213 04:03:13.330115 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-hn8rh" event={"ID":"002ad17d-8277-4767-824e-93b28859e781","Type":"ContainerStarted","Data":"537a5345b46a3cf524a12e8ad1eb572f13e03b40ad89f7224a173055da7314c7"} Dec 13 04:03:13 crc kubenswrapper[4766]: I1213 04:03:13.374264 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-hn8rh" podStartSLOduration=2.542072212 podStartE2EDuration="8.374243603s" podCreationTimestamp="2025-12-13 04:03:05 +0000 UTC" firstStartedPulling="2025-12-13 04:03:06.327479207 +0000 UTC m=+1117.837412171" lastFinishedPulling="2025-12-13 04:03:12.159650608 +0000 UTC m=+1123.669583562" observedRunningTime="2025-12-13 04:03:13.370803334 +0000 UTC m=+1124.880736318" watchObservedRunningTime="2025-12-13 04:03:13.374243603 +0000 UTC m=+1124.884176567" Dec 13 04:03:15 crc kubenswrapper[4766]: I1213 04:03:15.956705 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-hn8rh" Dec 13 04:03:15 crc kubenswrapper[4766]: I1213 04:03:15.957061 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-hn8rh" Dec 13 04:03:15 crc kubenswrapper[4766]: I1213 04:03:15.985901 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-hn8rh" Dec 13 04:03:25 crc kubenswrapper[4766]: I1213 04:03:25.987541 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-hn8rh" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.472190 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq"] Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.475111 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.477695 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vg794" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.481228 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq"] Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.570068 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7rg2\" (UniqueName: \"kubernetes.io/projected/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-kube-api-access-j7rg2\") pod \"a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.570315 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-util\") pod \"a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.570346 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-bundle\") pod \"a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.672271 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7rg2\" (UniqueName: \"kubernetes.io/projected/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-kube-api-access-j7rg2\") pod \"a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.672537 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-util\") pod \"a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.672560 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-bundle\") pod \"a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.673730 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-util\") pod \"a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.673940 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-bundle\") pod \"a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.699359 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7rg2\" (UniqueName: \"kubernetes.io/projected/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-kube-api-access-j7rg2\") pod \"a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:33 crc kubenswrapper[4766]: I1213 04:03:33.805964 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:34 crc kubenswrapper[4766]: I1213 04:03:34.298490 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq"] Dec 13 04:03:34 crc kubenswrapper[4766]: I1213 04:03:34.516173 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" event={"ID":"0b23e5f0-5ae8-4925-a71c-8c78b43b7814","Type":"ContainerStarted","Data":"b17dcb611bc5ae8e1e6a8e627be115ddd866cff8a77aacccaeda6a69f76dbcb5"} Dec 13 04:03:34 crc kubenswrapper[4766]: I1213 04:03:34.516264 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" event={"ID":"0b23e5f0-5ae8-4925-a71c-8c78b43b7814","Type":"ContainerStarted","Data":"a317f6f068da61cc0416cb4adec7b962ece2f143fc7bbf650207de8ebf0ca9d2"} Dec 13 04:03:35 crc kubenswrapper[4766]: I1213 04:03:35.523771 4766 generic.go:334] "Generic (PLEG): container finished" podID="0b23e5f0-5ae8-4925-a71c-8c78b43b7814" containerID="b17dcb611bc5ae8e1e6a8e627be115ddd866cff8a77aacccaeda6a69f76dbcb5" exitCode=0 Dec 13 04:03:35 crc kubenswrapper[4766]: I1213 04:03:35.523837 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" event={"ID":"0b23e5f0-5ae8-4925-a71c-8c78b43b7814","Type":"ContainerDied","Data":"b17dcb611bc5ae8e1e6a8e627be115ddd866cff8a77aacccaeda6a69f76dbcb5"} Dec 13 04:03:35 crc kubenswrapper[4766]: I1213 04:03:35.526111 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 04:03:37 crc kubenswrapper[4766]: I1213 04:03:37.539655 4766 generic.go:334] "Generic (PLEG): container finished" podID="0b23e5f0-5ae8-4925-a71c-8c78b43b7814" containerID="107ada432ee25dc86a2e1c46c8e29ce04f9a99014ccafcca59c64708692e98a7" exitCode=0 Dec 13 04:03:37 crc kubenswrapper[4766]: I1213 04:03:37.539722 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" event={"ID":"0b23e5f0-5ae8-4925-a71c-8c78b43b7814","Type":"ContainerDied","Data":"107ada432ee25dc86a2e1c46c8e29ce04f9a99014ccafcca59c64708692e98a7"} Dec 13 04:03:38 crc kubenswrapper[4766]: I1213 04:03:38.550358 4766 generic.go:334] "Generic (PLEG): container finished" podID="0b23e5f0-5ae8-4925-a71c-8c78b43b7814" containerID="320da656c032ca9f8bb04e078be2b8a938753bae39117b924dd48934cff5762f" exitCode=0 Dec 13 04:03:38 crc kubenswrapper[4766]: I1213 04:03:38.550440 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" event={"ID":"0b23e5f0-5ae8-4925-a71c-8c78b43b7814","Type":"ContainerDied","Data":"320da656c032ca9f8bb04e078be2b8a938753bae39117b924dd48934cff5762f"} Dec 13 04:03:39 crc kubenswrapper[4766]: I1213 04:03:39.819374 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:39 crc kubenswrapper[4766]: I1213 04:03:39.964734 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7rg2\" (UniqueName: \"kubernetes.io/projected/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-kube-api-access-j7rg2\") pod \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " Dec 13 04:03:39 crc kubenswrapper[4766]: I1213 04:03:39.964784 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-bundle\") pod \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " Dec 13 04:03:39 crc kubenswrapper[4766]: I1213 04:03:39.964951 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-util\") pod \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\" (UID: \"0b23e5f0-5ae8-4925-a71c-8c78b43b7814\") " Dec 13 04:03:39 crc kubenswrapper[4766]: I1213 04:03:39.966065 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-bundle" (OuterVolumeSpecName: "bundle") pod "0b23e5f0-5ae8-4925-a71c-8c78b43b7814" (UID: "0b23e5f0-5ae8-4925-a71c-8c78b43b7814"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:03:39 crc kubenswrapper[4766]: I1213 04:03:39.974214 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-kube-api-access-j7rg2" (OuterVolumeSpecName: "kube-api-access-j7rg2") pod "0b23e5f0-5ae8-4925-a71c-8c78b43b7814" (UID: "0b23e5f0-5ae8-4925-a71c-8c78b43b7814"). InnerVolumeSpecName "kube-api-access-j7rg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:03:39 crc kubenswrapper[4766]: I1213 04:03:39.981981 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-util" (OuterVolumeSpecName: "util") pod "0b23e5f0-5ae8-4925-a71c-8c78b43b7814" (UID: "0b23e5f0-5ae8-4925-a71c-8c78b43b7814"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:03:40 crc kubenswrapper[4766]: I1213 04:03:40.067157 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-util\") on node \"crc\" DevicePath \"\"" Dec 13 04:03:40 crc kubenswrapper[4766]: I1213 04:03:40.067195 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7rg2\" (UniqueName: \"kubernetes.io/projected/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-kube-api-access-j7rg2\") on node \"crc\" DevicePath \"\"" Dec 13 04:03:40 crc kubenswrapper[4766]: I1213 04:03:40.067209 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b23e5f0-5ae8-4925-a71c-8c78b43b7814-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 04:03:40 crc kubenswrapper[4766]: I1213 04:03:40.566749 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" event={"ID":"0b23e5f0-5ae8-4925-a71c-8c78b43b7814","Type":"ContainerDied","Data":"a317f6f068da61cc0416cb4adec7b962ece2f143fc7bbf650207de8ebf0ca9d2"} Dec 13 04:03:40 crc kubenswrapper[4766]: I1213 04:03:40.566812 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a317f6f068da61cc0416cb4adec7b962ece2f143fc7bbf650207de8ebf0ca9d2" Dec 13 04:03:40 crc kubenswrapper[4766]: I1213 04:03:40.566850 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq" Dec 13 04:03:45 crc kubenswrapper[4766]: I1213 04:03:45.610953 4766 generic.go:334] "Generic (PLEG): container finished" podID="9ae00965-a778-4106-85dd-84fba5782c83" containerID="7bd860f1c13e071c938cd204c3039ade63dd1538291bd438a088705ca2a05c39" exitCode=0 Dec 13 04:03:45 crc kubenswrapper[4766]: I1213 04:03:45.611116 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"9ae00965-a778-4106-85dd-84fba5782c83","Type":"ContainerDied","Data":"7bd860f1c13e071c938cd204c3039ade63dd1538291bd438a088705ca2a05c39"} Dec 13 04:03:46 crc kubenswrapper[4766]: I1213 04:03:46.626692 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"9ae00965-a778-4106-85dd-84fba5782c83","Type":"ContainerStarted","Data":"c76c4a003a9a570de24f7afe744abe893d62276cbc05db754a5c952b14f4c12f"} Dec 13 04:03:46 crc kubenswrapper[4766]: I1213 04:03:46.627305 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/rabbitmq-server-0" Dec 13 04:03:46 crc kubenswrapper[4766]: I1213 04:03:46.658512 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/rabbitmq-server-0" podStartSLOduration=36.229734706 podStartE2EDuration="43.658489639s" podCreationTimestamp="2025-12-13 04:03:03 +0000 UTC" firstStartedPulling="2025-12-13 04:03:04.809943923 +0000 UTC m=+1116.319876887" lastFinishedPulling="2025-12-13 04:03:12.238698846 +0000 UTC m=+1123.748631820" observedRunningTime="2025-12-13 04:03:46.650385265 +0000 UTC m=+1158.160318239" watchObservedRunningTime="2025-12-13 04:03:46.658489639 +0000 UTC m=+1158.168422603" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.502667 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"] Dec 13 04:04:02 crc 
kubenswrapper[4766]: E1213 04:04:02.503598 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b23e5f0-5ae8-4925-a71c-8c78b43b7814" containerName="util" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.503625 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b23e5f0-5ae8-4925-a71c-8c78b43b7814" containerName="util" Dec 13 04:04:02 crc kubenswrapper[4766]: E1213 04:04:02.503638 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b23e5f0-5ae8-4925-a71c-8c78b43b7814" containerName="pull" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.503644 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b23e5f0-5ae8-4925-a71c-8c78b43b7814" containerName="pull" Dec 13 04:04:02 crc kubenswrapper[4766]: E1213 04:04:02.503663 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b23e5f0-5ae8-4925-a71c-8c78b43b7814" containerName="extract" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.503670 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b23e5f0-5ae8-4925-a71c-8c78b43b7814" containerName="extract" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.503811 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b23e5f0-5ae8-4925-a71c-8c78b43b7814" containerName="extract" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.504630 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.508542 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.515722 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2khnr" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.518328 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"] Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.556804 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmr4j\" (UniqueName: \"kubernetes.io/projected/6b50297f-eb50-4348-8a02-05ed4e3c2d61-kube-api-access-xmr4j\") pod \"keystone-operator-controller-manager-59f97d4bd8-pqmhg\" (UID: \"6b50297f-eb50-4348-8a02-05ed4e3c2d61\") " pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.556896 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b50297f-eb50-4348-8a02-05ed4e3c2d61-webhook-cert\") pod \"keystone-operator-controller-manager-59f97d4bd8-pqmhg\" (UID: \"6b50297f-eb50-4348-8a02-05ed4e3c2d61\") " pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg" Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.556942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b50297f-eb50-4348-8a02-05ed4e3c2d61-apiservice-cert\") pod \"keystone-operator-controller-manager-59f97d4bd8-pqmhg\" (UID: \"6b50297f-eb50-4348-8a02-05ed4e3c2d61\") " pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg" Dec 13 
04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.658393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmr4j\" (UniqueName: \"kubernetes.io/projected/6b50297f-eb50-4348-8a02-05ed4e3c2d61-kube-api-access-xmr4j\") pod \"keystone-operator-controller-manager-59f97d4bd8-pqmhg\" (UID: \"6b50297f-eb50-4348-8a02-05ed4e3c2d61\") " pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"
Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.658462 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b50297f-eb50-4348-8a02-05ed4e3c2d61-webhook-cert\") pod \"keystone-operator-controller-manager-59f97d4bd8-pqmhg\" (UID: \"6b50297f-eb50-4348-8a02-05ed4e3c2d61\") " pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"
Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.658496 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b50297f-eb50-4348-8a02-05ed4e3c2d61-apiservice-cert\") pod \"keystone-operator-controller-manager-59f97d4bd8-pqmhg\" (UID: \"6b50297f-eb50-4348-8a02-05ed4e3c2d61\") " pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"
Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.664306 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b50297f-eb50-4348-8a02-05ed4e3c2d61-apiservice-cert\") pod \"keystone-operator-controller-manager-59f97d4bd8-pqmhg\" (UID: \"6b50297f-eb50-4348-8a02-05ed4e3c2d61\") " pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"
Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.665041 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b50297f-eb50-4348-8a02-05ed4e3c2d61-webhook-cert\") pod \"keystone-operator-controller-manager-59f97d4bd8-pqmhg\" (UID: \"6b50297f-eb50-4348-8a02-05ed4e3c2d61\") " pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"
Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.681104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmr4j\" (UniqueName: \"kubernetes.io/projected/6b50297f-eb50-4348-8a02-05ed4e3c2d61-kube-api-access-xmr4j\") pod \"keystone-operator-controller-manager-59f97d4bd8-pqmhg\" (UID: \"6b50297f-eb50-4348-8a02-05ed4e3c2d61\") " pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"
Dec 13 04:04:02 crc kubenswrapper[4766]: I1213 04:04:02.825884 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"
Dec 13 04:04:03 crc kubenswrapper[4766]: I1213 04:04:03.258649 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"]
Dec 13 04:04:03 crc kubenswrapper[4766]: I1213 04:04:03.759245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg" event={"ID":"6b50297f-eb50-4348-8a02-05ed4e3c2d61","Type":"ContainerStarted","Data":"dc238c08008b29542fcd8e6985ebda41edc7d501e579db23304261945a2020ed"}
Dec 13 04:04:04 crc kubenswrapper[4766]: I1213 04:04:04.553165 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/rabbitmq-server-0"
Dec 13 04:04:05 crc kubenswrapper[4766]: I1213 04:04:05.774956 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg" event={"ID":"6b50297f-eb50-4348-8a02-05ed4e3c2d61","Type":"ContainerStarted","Data":"665ecc3b606b4fa9fa27bf630481fb129157f2dfd10d361950b31910c0483c37"}
Dec 13 04:04:05 crc kubenswrapper[4766]: I1213 04:04:05.775296 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"
Dec 13 04:04:05 crc kubenswrapper[4766]: I1213 04:04:05.775308 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg" event={"ID":"6b50297f-eb50-4348-8a02-05ed4e3c2d61","Type":"ContainerStarted","Data":"c01db03325494653425db6d3818e683d290e174724ea64b4c7d98faee65dea5c"}
Dec 13 04:04:05 crc kubenswrapper[4766]: I1213 04:04:05.796481 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg" podStartSLOduration=1.7331642390000002 podStartE2EDuration="3.796464881s" podCreationTimestamp="2025-12-13 04:04:02 +0000 UTC" firstStartedPulling="2025-12-13 04:04:03.269625492 +0000 UTC m=+1174.779558446" lastFinishedPulling="2025-12-13 04:04:05.332926114 +0000 UTC m=+1176.842859088" observedRunningTime="2025-12-13 04:04:05.792757623 +0000 UTC m=+1177.302690597" watchObservedRunningTime="2025-12-13 04:04:05.796464881 +0000 UTC m=+1177.306397845"
Dec 13 04:04:12 crc kubenswrapper[4766]: I1213 04:04:12.829748 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-59f97d4bd8-pqmhg"
Dec 13 04:04:14 crc kubenswrapper[4766]: I1213 04:04:14.818446 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-create-k7gmq"]
Dec 13 04:04:14 crc kubenswrapper[4766]: I1213 04:04:14.819475 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-k7gmq"
Dec 13 04:04:14 crc kubenswrapper[4766]: I1213 04:04:14.836352 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-k7gmq"]
Dec 13 04:04:14 crc kubenswrapper[4766]: I1213 04:04:14.941850 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg6q8\" (UniqueName: \"kubernetes.io/projected/4108031b-93a3-489e-96a7-796f5c427d68-kube-api-access-pg6q8\") pod \"keystone-db-create-k7gmq\" (UID: \"4108031b-93a3-489e-96a7-796f5c427d68\") " pod="glance-kuttl-tests/keystone-db-create-k7gmq"
Dec 13 04:04:15 crc kubenswrapper[4766]: I1213 04:04:15.043261 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg6q8\" (UniqueName: \"kubernetes.io/projected/4108031b-93a3-489e-96a7-796f5c427d68-kube-api-access-pg6q8\") pod \"keystone-db-create-k7gmq\" (UID: \"4108031b-93a3-489e-96a7-796f5c427d68\") " pod="glance-kuttl-tests/keystone-db-create-k7gmq"
Dec 13 04:04:15 crc kubenswrapper[4766]: I1213 04:04:15.074856 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg6q8\" (UniqueName: \"kubernetes.io/projected/4108031b-93a3-489e-96a7-796f5c427d68-kube-api-access-pg6q8\") pod \"keystone-db-create-k7gmq\" (UID: \"4108031b-93a3-489e-96a7-796f5c427d68\") " pod="glance-kuttl-tests/keystone-db-create-k7gmq"
Dec 13 04:04:15 crc kubenswrapper[4766]: I1213 04:04:15.138018 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-k7gmq"
Dec 13 04:04:15 crc kubenswrapper[4766]: I1213 04:04:15.579952 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-k7gmq"]
Dec 13 04:04:15 crc kubenswrapper[4766]: W1213 04:04:15.596709 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4108031b_93a3_489e_96a7_796f5c427d68.slice/crio-61cbdd3c9682b46c953a9566615694cd3787343691a2b022eb3e943a075d6c74 WatchSource:0}: Error finding container 61cbdd3c9682b46c953a9566615694cd3787343691a2b022eb3e943a075d6c74: Status 404 returned error can't find the container with id 61cbdd3c9682b46c953a9566615694cd3787343691a2b022eb3e943a075d6c74
Dec 13 04:04:15 crc kubenswrapper[4766]: I1213 04:04:15.839593 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-k7gmq" event={"ID":"4108031b-93a3-489e-96a7-796f5c427d68","Type":"ContainerStarted","Data":"61cbdd3c9682b46c953a9566615694cd3787343691a2b022eb3e943a075d6c74"}
Dec 13 04:04:16 crc kubenswrapper[4766]: I1213 04:04:16.849146 4766 generic.go:334] "Generic (PLEG): container finished" podID="4108031b-93a3-489e-96a7-796f5c427d68" containerID="43f2be0e96e6ab87ca031d32e4077fbc05d070a18e2743051452bce2ee4087be" exitCode=0
Dec 13 04:04:16 crc kubenswrapper[4766]: I1213 04:04:16.849246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-k7gmq" event={"ID":"4108031b-93a3-489e-96a7-796f5c427d68","Type":"ContainerDied","Data":"43f2be0e96e6ab87ca031d32e4077fbc05d070a18e2743051452bce2ee4087be"}
Dec 13 04:04:17 crc kubenswrapper[4766]: I1213 04:04:17.624090 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-index-6lbrw"]
Dec 13 04:04:17 crc kubenswrapper[4766]: I1213 04:04:17.624951 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-6lbrw"
Dec 13 04:04:17 crc kubenswrapper[4766]: I1213 04:04:17.627158 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-6lbrw"]
Dec 13 04:04:17 crc kubenswrapper[4766]: I1213 04:04:17.629630 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-index-dockercfg-2l4fq"
Dec 13 04:04:17 crc kubenswrapper[4766]: I1213 04:04:17.795328 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qq5z\" (UniqueName: \"kubernetes.io/projected/610a5f6d-2863-42d4-9a95-024335d79560-kube-api-access-7qq5z\") pod \"horizon-operator-index-6lbrw\" (UID: \"610a5f6d-2863-42d4-9a95-024335d79560\") " pod="openstack-operators/horizon-operator-index-6lbrw"
Dec 13 04:04:17 crc kubenswrapper[4766]: I1213 04:04:17.897086 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qq5z\" (UniqueName: \"kubernetes.io/projected/610a5f6d-2863-42d4-9a95-024335d79560-kube-api-access-7qq5z\") pod \"horizon-operator-index-6lbrw\" (UID: \"610a5f6d-2863-42d4-9a95-024335d79560\") " pod="openstack-operators/horizon-operator-index-6lbrw"
Dec 13 04:04:17 crc kubenswrapper[4766]: I1213 04:04:17.918158 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qq5z\" (UniqueName: \"kubernetes.io/projected/610a5f6d-2863-42d4-9a95-024335d79560-kube-api-access-7qq5z\") pod \"horizon-operator-index-6lbrw\" (UID: \"610a5f6d-2863-42d4-9a95-024335d79560\") " pod="openstack-operators/horizon-operator-index-6lbrw"
Dec 13 04:04:17 crc kubenswrapper[4766]: I1213 04:04:17.987158 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-6lbrw"
Dec 13 04:04:18 crc kubenswrapper[4766]: I1213 04:04:18.121459 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-k7gmq"
Dec 13 04:04:18 crc kubenswrapper[4766]: I1213 04:04:18.252957 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-6lbrw"]
Dec 13 04:04:18 crc kubenswrapper[4766]: I1213 04:04:18.307977 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg6q8\" (UniqueName: \"kubernetes.io/projected/4108031b-93a3-489e-96a7-796f5c427d68-kube-api-access-pg6q8\") pod \"4108031b-93a3-489e-96a7-796f5c427d68\" (UID: \"4108031b-93a3-489e-96a7-796f5c427d68\") "
Dec 13 04:04:18 crc kubenswrapper[4766]: I1213 04:04:18.314763 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4108031b-93a3-489e-96a7-796f5c427d68-kube-api-access-pg6q8" (OuterVolumeSpecName: "kube-api-access-pg6q8") pod "4108031b-93a3-489e-96a7-796f5c427d68" (UID: "4108031b-93a3-489e-96a7-796f5c427d68"). InnerVolumeSpecName "kube-api-access-pg6q8". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:18 crc kubenswrapper[4766]: I1213 04:04:18.410382 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg6q8\" (UniqueName: \"kubernetes.io/projected/4108031b-93a3-489e-96a7-796f5c427d68-kube-api-access-pg6q8\") on node \"crc\" DevicePath \"\"" Dec 13 04:04:18 crc kubenswrapper[4766]: I1213 04:04:18.866265 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-6lbrw" event={"ID":"610a5f6d-2863-42d4-9a95-024335d79560","Type":"ContainerStarted","Data":"7f2cd3daff382c7be82e498c40993baf2be58bd823a4f4e885be85f064a04cab"} Dec 13 04:04:18 crc kubenswrapper[4766]: I1213 04:04:18.867872 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-k7gmq" event={"ID":"4108031b-93a3-489e-96a7-796f5c427d68","Type":"ContainerDied","Data":"61cbdd3c9682b46c953a9566615694cd3787343691a2b022eb3e943a075d6c74"} Dec 13 04:04:18 crc kubenswrapper[4766]: I1213 04:04:18.867912 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61cbdd3c9682b46c953a9566615694cd3787343691a2b022eb3e943a075d6c74" Dec 13 04:04:18 crc kubenswrapper[4766]: I1213 04:04:18.867935 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-k7gmq" Dec 13 04:04:19 crc kubenswrapper[4766]: I1213 04:04:19.875397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-6lbrw" event={"ID":"610a5f6d-2863-42d4-9a95-024335d79560","Type":"ContainerStarted","Data":"b0505bdfbe2535112dc2f7ef5a72e1fd8bd32ff27f6b5810be5ca9d42320cc53"} Dec 13 04:04:19 crc kubenswrapper[4766]: I1213 04:04:19.889905 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-index-6lbrw" podStartSLOduration=1.446863482 podStartE2EDuration="2.889875269s" podCreationTimestamp="2025-12-13 04:04:17 +0000 UTC" firstStartedPulling="2025-12-13 04:04:18.262504005 +0000 UTC m=+1189.772436979" lastFinishedPulling="2025-12-13 04:04:19.705515812 +0000 UTC m=+1191.215448766" observedRunningTime="2025-12-13 04:04:19.888464568 +0000 UTC m=+1191.398397532" watchObservedRunningTime="2025-12-13 04:04:19.889875269 +0000 UTC m=+1191.399808243" Dec 13 04:04:20 crc kubenswrapper[4766]: I1213 04:04:20.822864 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-x6k42"] Dec 13 04:04:20 crc kubenswrapper[4766]: E1213 04:04:20.823129 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4108031b-93a3-489e-96a7-796f5c427d68" containerName="mariadb-database-create" Dec 13 04:04:20 crc kubenswrapper[4766]: I1213 04:04:20.823141 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4108031b-93a3-489e-96a7-796f5c427d68" containerName="mariadb-database-create" Dec 13 04:04:20 crc kubenswrapper[4766]: I1213 04:04:20.823265 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4108031b-93a3-489e-96a7-796f5c427d68" containerName="mariadb-database-create" Dec 13 04:04:20 crc kubenswrapper[4766]: I1213 04:04:20.823822 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-x6k42" Dec 13 04:04:20 crc kubenswrapper[4766]: I1213 04:04:20.832043 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-pzb68" Dec 13 04:04:20 crc kubenswrapper[4766]: I1213 04:04:20.848940 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-x6k42"] Dec 13 04:04:20 crc kubenswrapper[4766]: I1213 04:04:20.952088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cmr6\" (UniqueName: \"kubernetes.io/projected/783f433d-69d6-4c52-8884-96bc07460269-kube-api-access-9cmr6\") pod \"swift-operator-index-x6k42\" (UID: \"783f433d-69d6-4c52-8884-96bc07460269\") " pod="openstack-operators/swift-operator-index-x6k42" Dec 13 04:04:21 crc kubenswrapper[4766]: I1213 04:04:21.054309 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cmr6\" (UniqueName: \"kubernetes.io/projected/783f433d-69d6-4c52-8884-96bc07460269-kube-api-access-9cmr6\") pod \"swift-operator-index-x6k42\" (UID: \"783f433d-69d6-4c52-8884-96bc07460269\") " pod="openstack-operators/swift-operator-index-x6k42" Dec 13 04:04:21 crc kubenswrapper[4766]: I1213 04:04:21.073097 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cmr6\" (UniqueName: \"kubernetes.io/projected/783f433d-69d6-4c52-8884-96bc07460269-kube-api-access-9cmr6\") pod \"swift-operator-index-x6k42\" (UID: \"783f433d-69d6-4c52-8884-96bc07460269\") " pod="openstack-operators/swift-operator-index-x6k42" Dec 13 04:04:21 crc kubenswrapper[4766]: I1213 04:04:21.167997 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-x6k42" Dec 13 04:04:21 crc kubenswrapper[4766]: I1213 04:04:21.652419 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-x6k42"] Dec 13 04:04:21 crc kubenswrapper[4766]: I1213 04:04:21.896648 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-x6k42" event={"ID":"783f433d-69d6-4c52-8884-96bc07460269","Type":"ContainerStarted","Data":"a22ff55c08d356e7d9bf55580686f52c037bf78726b363345f871c0ea6293de9"} Dec 13 04:04:23 crc kubenswrapper[4766]: I1213 04:04:23.913366 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-x6k42" event={"ID":"783f433d-69d6-4c52-8884-96bc07460269","Type":"ContainerStarted","Data":"4f8eb81a7ec2208cf5048a833c99104d74c02373f1ee417e93e21698759f71ef"} Dec 13 04:04:23 crc kubenswrapper[4766]: I1213 04:04:23.937304 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-x6k42" podStartSLOduration=2.418771697 podStartE2EDuration="3.93728199s" podCreationTimestamp="2025-12-13 04:04:20 +0000 UTC" firstStartedPulling="2025-12-13 04:04:21.665202245 +0000 UTC m=+1193.175135209" lastFinishedPulling="2025-12-13 04:04:23.183712538 +0000 UTC m=+1194.693645502" observedRunningTime="2025-12-13 04:04:23.932651216 +0000 UTC m=+1195.442584180" watchObservedRunningTime="2025-12-13 04:04:23.93728199 +0000 UTC m=+1195.447214944" Dec 13 04:04:24 crc kubenswrapper[4766]: I1213 04:04:24.717685 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-ff52-account-create-mx847"] Dec 13 04:04:24 crc kubenswrapper[4766]: I1213 04:04:24.718654 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-ff52-account-create-mx847" Dec 13 04:04:24 crc kubenswrapper[4766]: I1213 04:04:24.721791 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-db-secret" Dec 13 04:04:24 crc kubenswrapper[4766]: I1213 04:04:24.732421 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-ff52-account-create-mx847"] Dec 13 04:04:24 crc kubenswrapper[4766]: I1213 04:04:24.814017 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n55f\" (UniqueName: \"kubernetes.io/projected/1d412be4-09f0-40a1-bda1-fafd744e6d9a-kube-api-access-5n55f\") pod \"keystone-ff52-account-create-mx847\" (UID: \"1d412be4-09f0-40a1-bda1-fafd744e6d9a\") " pod="glance-kuttl-tests/keystone-ff52-account-create-mx847" Dec 13 04:04:24 crc kubenswrapper[4766]: I1213 04:04:24.915282 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n55f\" (UniqueName: \"kubernetes.io/projected/1d412be4-09f0-40a1-bda1-fafd744e6d9a-kube-api-access-5n55f\") pod \"keystone-ff52-account-create-mx847\" (UID: \"1d412be4-09f0-40a1-bda1-fafd744e6d9a\") " pod="glance-kuttl-tests/keystone-ff52-account-create-mx847" Dec 13 04:04:24 crc kubenswrapper[4766]: I1213 04:04:24.935594 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n55f\" (UniqueName: \"kubernetes.io/projected/1d412be4-09f0-40a1-bda1-fafd744e6d9a-kube-api-access-5n55f\") pod \"keystone-ff52-account-create-mx847\" (UID: \"1d412be4-09f0-40a1-bda1-fafd744e6d9a\") " pod="glance-kuttl-tests/keystone-ff52-account-create-mx847" Dec 13 04:04:25 crc kubenswrapper[4766]: I1213 04:04:25.034516 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-ff52-account-create-mx847" Dec 13 04:04:25 crc kubenswrapper[4766]: I1213 04:04:25.449163 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-ff52-account-create-mx847"] Dec 13 04:04:25 crc kubenswrapper[4766]: I1213 04:04:25.926586 4766 generic.go:334] "Generic (PLEG): container finished" podID="1d412be4-09f0-40a1-bda1-fafd744e6d9a" containerID="fd312189f795b21cd7df6f59132fa01a00627d6d78ec1d4e8606a353428c65f9" exitCode=0 Dec 13 04:04:25 crc kubenswrapper[4766]: I1213 04:04:25.926704 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-ff52-account-create-mx847" event={"ID":"1d412be4-09f0-40a1-bda1-fafd744e6d9a","Type":"ContainerDied","Data":"fd312189f795b21cd7df6f59132fa01a00627d6d78ec1d4e8606a353428c65f9"} Dec 13 04:04:25 crc kubenswrapper[4766]: I1213 04:04:25.927615 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-ff52-account-create-mx847" event={"ID":"1d412be4-09f0-40a1-bda1-fafd744e6d9a","Type":"ContainerStarted","Data":"6a9e4c0fef73500e783fa7f293b0b1ff419f0c6e07a87b14cc4602fa300e3a5f"} Dec 13 04:04:27 crc kubenswrapper[4766]: I1213 04:04:27.224832 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-ff52-account-create-mx847" Dec 13 04:04:27 crc kubenswrapper[4766]: I1213 04:04:27.352993 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n55f\" (UniqueName: \"kubernetes.io/projected/1d412be4-09f0-40a1-bda1-fafd744e6d9a-kube-api-access-5n55f\") pod \"1d412be4-09f0-40a1-bda1-fafd744e6d9a\" (UID: \"1d412be4-09f0-40a1-bda1-fafd744e6d9a\") " Dec 13 04:04:27 crc kubenswrapper[4766]: I1213 04:04:27.359760 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d412be4-09f0-40a1-bda1-fafd744e6d9a-kube-api-access-5n55f" (OuterVolumeSpecName: "kube-api-access-5n55f") pod "1d412be4-09f0-40a1-bda1-fafd744e6d9a" (UID: "1d412be4-09f0-40a1-bda1-fafd744e6d9a"). InnerVolumeSpecName "kube-api-access-5n55f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:27 crc kubenswrapper[4766]: I1213 04:04:27.454515 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n55f\" (UniqueName: \"kubernetes.io/projected/1d412be4-09f0-40a1-bda1-fafd744e6d9a-kube-api-access-5n55f\") on node \"crc\" DevicePath \"\"" Dec 13 04:04:27 crc kubenswrapper[4766]: I1213 04:04:27.946733 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-ff52-account-create-mx847" event={"ID":"1d412be4-09f0-40a1-bda1-fafd744e6d9a","Type":"ContainerDied","Data":"6a9e4c0fef73500e783fa7f293b0b1ff419f0c6e07a87b14cc4602fa300e3a5f"} Dec 13 04:04:27 crc kubenswrapper[4766]: I1213 04:04:27.947044 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a9e4c0fef73500e783fa7f293b0b1ff419f0c6e07a87b14cc4602fa300e3a5f" Dec 13 04:04:27 crc kubenswrapper[4766]: I1213 04:04:27.946818 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-ff52-account-create-mx847" Dec 13 04:04:27 crc kubenswrapper[4766]: I1213 04:04:27.988056 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/horizon-operator-index-6lbrw" Dec 13 04:04:27 crc kubenswrapper[4766]: I1213 04:04:27.988703 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-index-6lbrw" Dec 13 04:04:28 crc kubenswrapper[4766]: I1213 04:04:28.017647 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/horizon-operator-index-6lbrw" Dec 13 04:04:28 crc kubenswrapper[4766]: I1213 04:04:28.983591 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-index-6lbrw" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.281833 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-sync-7t8dp"] Dec 13 04:04:30 crc kubenswrapper[4766]: E1213 04:04:30.283084 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d412be4-09f0-40a1-bda1-fafd744e6d9a" containerName="mariadb-account-create" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.283116 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d412be4-09f0-40a1-bda1-fafd744e6d9a" containerName="mariadb-account-create" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.283574 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d412be4-09f0-40a1-bda1-fafd744e6d9a" containerName="mariadb-account-create" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.284334 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-7t8dp" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.296829 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.297246 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.297650 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-5m4ff" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.297784 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.316856 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-7t8dp"] Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.394457 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58tzx\" (UniqueName: \"kubernetes.io/projected/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-kube-api-access-58tzx\") pod \"keystone-db-sync-7t8dp\" (UID: \"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9\") " pod="glance-kuttl-tests/keystone-db-sync-7t8dp" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.394558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-config-data\") pod \"keystone-db-sync-7t8dp\" (UID: \"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9\") " pod="glance-kuttl-tests/keystone-db-sync-7t8dp" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 
04:04:30.495986 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-config-data\") pod \"keystone-db-sync-7t8dp\" (UID: \"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9\") " pod="glance-kuttl-tests/keystone-db-sync-7t8dp" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.496211 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58tzx\" (UniqueName: \"kubernetes.io/projected/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-kube-api-access-58tzx\") pod \"keystone-db-sync-7t8dp\" (UID: \"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9\") " pod="glance-kuttl-tests/keystone-db-sync-7t8dp" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.505838 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-config-data\") pod \"keystone-db-sync-7t8dp\" (UID: \"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9\") " pod="glance-kuttl-tests/keystone-db-sync-7t8dp" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.516154 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58tzx\" (UniqueName: \"kubernetes.io/projected/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-kube-api-access-58tzx\") pod \"keystone-db-sync-7t8dp\" (UID: \"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9\") " pod="glance-kuttl-tests/keystone-db-sync-7t8dp" Dec 13 04:04:30 crc kubenswrapper[4766]: I1213 04:04:30.613186 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-7t8dp" Dec 13 04:04:31 crc kubenswrapper[4766]: I1213 04:04:31.036653 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-7t8dp"] Dec 13 04:04:31 crc kubenswrapper[4766]: I1213 04:04:31.168878 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-x6k42" Dec 13 04:04:31 crc kubenswrapper[4766]: I1213 04:04:31.170291 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-x6k42" Dec 13 04:04:31 crc kubenswrapper[4766]: I1213 04:04:31.202995 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/swift-operator-index-x6k42" Dec 13 04:04:31 crc kubenswrapper[4766]: I1213 04:04:31.972761 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-7t8dp" event={"ID":"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9","Type":"ContainerStarted","Data":"4e98afa8bda78d798ac980601f138af51f82d5264569c66b2ee602a67092937f"} Dec 13 04:04:32 crc kubenswrapper[4766]: I1213 04:04:32.003190 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-x6k42" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.673073 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2"] Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.675083 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.678928 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vg794" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.683063 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2"] Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.688922 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-util\") pod \"154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") " pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.688995 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76lld\" (UniqueName: \"kubernetes.io/projected/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-kube-api-access-76lld\") pod \"154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") " pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.689052 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-bundle\") pod \"154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") " pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.790491 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-util\") pod \"154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") " pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.790591 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76lld\" (UniqueName: \"kubernetes.io/projected/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-kube-api-access-76lld\") pod \"154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") " pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.790635 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-bundle\") pod \"154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") " pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.791134 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-util\") pod \"154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") " pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.792288 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-bundle\") pod \"154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") " pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:36 crc kubenswrapper[4766]: I1213 04:04:36.838750 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76lld\" (UniqueName: \"kubernetes.io/projected/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-kube-api-access-76lld\") pod \"154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") " pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.008140 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.697144 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc"] Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.698896 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.716182 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc"] Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.805293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-util\") pod \"c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") " pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.805418 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-bundle\") pod \"c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") " pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.805618 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mznmg\" (UniqueName: \"kubernetes.io/projected/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-kube-api-access-mznmg\") pod \"c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") " pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" Dec 13 04:04:37 crc 
kubenswrapper[4766]: I1213 04:04:37.907344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mznmg\" (UniqueName: \"kubernetes.io/projected/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-kube-api-access-mznmg\") pod \"c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") " pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.907885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-util\") pod \"c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") " pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.908401 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-util\") pod \"c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") " pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.908499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-bundle\") pod \"c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") " pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.908842 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-bundle\") pod \"c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") " pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" Dec 13 04:04:37 crc kubenswrapper[4766]: I1213 04:04:37.929403 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mznmg\" (UniqueName: \"kubernetes.io/projected/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-kube-api-access-mznmg\") pod \"c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") " pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" Dec 13 04:04:38 crc kubenswrapper[4766]: I1213 04:04:38.021565 4766 util.go:30] "No sandbox for pod can be found. 
Dec 13 04:04:39 crc kubenswrapper[4766]: I1213 04:04:39.732375 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 04:04:39 crc kubenswrapper[4766]: I1213 04:04:39.732839 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 04:04:40 crc kubenswrapper[4766]: I1213 04:04:40.114716 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc"]
Dec 13 04:04:40 crc kubenswrapper[4766]: W1213 04:04:40.120048 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcd5d2af_af90_4a46_9fe2_a70f544e7d66.slice/crio-e9cff74c7ceb220ceff439d59f6736c5f5d28ee8df0d23c7804af3a873f55d28 WatchSource:0}: Error finding container e9cff74c7ceb220ceff439d59f6736c5f5d28ee8df0d23c7804af3a873f55d28: Status 404 returned error can't find the container with id e9cff74c7ceb220ceff439d59f6736c5f5d28ee8df0d23c7804af3a873f55d28
Dec 13 04:04:40 crc kubenswrapper[4766]: I1213 04:04:40.283547 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2"]
Dec 13 04:04:40 crc kubenswrapper[4766]: W1213 04:04:40.287589 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf510c8f4_ae74_4fce_af53_a743bdb3e2d3.slice/crio-b0aaef54d7dd4f0fae392efe24ea4897a8be32a1acf60531d1b20f5dcdef1204 WatchSource:0}: Error finding container b0aaef54d7dd4f0fae392efe24ea4897a8be32a1acf60531d1b20f5dcdef1204: Status 404 returned error can't find the container with id b0aaef54d7dd4f0fae392efe24ea4897a8be32a1acf60531d1b20f5dcdef1204
Dec 13 04:04:41 crc kubenswrapper[4766]: I1213 04:04:41.041639 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-7t8dp" event={"ID":"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9","Type":"ContainerStarted","Data":"d09165a3bc2c2ad1341a5eaf05a4a0c95e3bb4a64bd05401aca14b8d58758556"}
Dec 13 04:04:41 crc kubenswrapper[4766]: I1213 04:04:41.043451 4766 generic.go:334] "Generic (PLEG): container finished" podID="f510c8f4-ae74-4fce-af53-a743bdb3e2d3" containerID="491b3f012d33acc2cd94353d029d48c2c2ca630a7ab12e71377530671d9da56e" exitCode=0
Dec 13 04:04:41 crc kubenswrapper[4766]: I1213 04:04:41.044443 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" event={"ID":"f510c8f4-ae74-4fce-af53-a743bdb3e2d3","Type":"ContainerDied","Data":"491b3f012d33acc2cd94353d029d48c2c2ca630a7ab12e71377530671d9da56e"}
Dec 13 04:04:41 crc kubenswrapper[4766]: I1213 04:04:41.044476 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" event={"ID":"f510c8f4-ae74-4fce-af53-a743bdb3e2d3","Type":"ContainerStarted","Data":"b0aaef54d7dd4f0fae392efe24ea4897a8be32a1acf60531d1b20f5dcdef1204"}
Dec 13 04:04:41 crc kubenswrapper[4766]: I1213 04:04:41.046628 4766 generic.go:334] "Generic (PLEG): container finished" podID="dcd5d2af-af90-4a46-9fe2-a70f544e7d66" containerID="bd51a44f22062d202dd7da9c814619964ceed39cee6b1e40eced8b57e0586d1c" exitCode=0
Dec 13 04:04:41 crc kubenswrapper[4766]: I1213 04:04:41.046652 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" event={"ID":"dcd5d2af-af90-4a46-9fe2-a70f544e7d66","Type":"ContainerDied","Data":"bd51a44f22062d202dd7da9c814619964ceed39cee6b1e40eced8b57e0586d1c"}
Dec 13 04:04:41 crc kubenswrapper[4766]: I1213 04:04:41.046666 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" event={"ID":"dcd5d2af-af90-4a46-9fe2-a70f544e7d66","Type":"ContainerStarted","Data":"e9cff74c7ceb220ceff439d59f6736c5f5d28ee8df0d23c7804af3a873f55d28"}
Dec 13 04:04:41 crc kubenswrapper[4766]: I1213 04:04:41.066936 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-db-sync-7t8dp" podStartSLOduration=2.242095061 podStartE2EDuration="11.066918421s" podCreationTimestamp="2025-12-13 04:04:30 +0000 UTC" firstStartedPulling="2025-12-13 04:04:31.041661404 +0000 UTC m=+1202.551594368" lastFinishedPulling="2025-12-13 04:04:39.866484764 +0000 UTC m=+1211.376417728" observedRunningTime="2025-12-13 04:04:41.064863871 +0000 UTC m=+1212.574796835" watchObservedRunningTime="2025-12-13 04:04:41.066918421 +0000 UTC m=+1212.576851385"
Dec 13 04:04:44 crc kubenswrapper[4766]: I1213 04:04:44.068603 4766 generic.go:334] "Generic (PLEG): container finished" podID="f510c8f4-ae74-4fce-af53-a743bdb3e2d3" containerID="8e87b8180f47c1fa0157993472862cd885032dcb82b5941dfb1f05643844dcd3" exitCode=0
Dec 13 04:04:44 crc kubenswrapper[4766]: I1213 04:04:44.068711 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" event={"ID":"f510c8f4-ae74-4fce-af53-a743bdb3e2d3","Type":"ContainerDied","Data":"8e87b8180f47c1fa0157993472862cd885032dcb82b5941dfb1f05643844dcd3"}
Dec 13 04:04:44 crc kubenswrapper[4766]: I1213 04:04:44.071124 4766 generic.go:334] "Generic (PLEG): container finished" podID="dcd5d2af-af90-4a46-9fe2-a70f544e7d66" containerID="971f29024a7815d766c3263a12b44ae80834aa1443e03e3c5806914f6d222f33" exitCode=0
Dec 13 04:04:44 crc kubenswrapper[4766]: I1213 04:04:44.071160 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" event={"ID":"dcd5d2af-af90-4a46-9fe2-a70f544e7d66","Type":"ContainerDied","Data":"971f29024a7815d766c3263a12b44ae80834aa1443e03e3c5806914f6d222f33"}
Dec 13 04:04:45 crc kubenswrapper[4766]: I1213 04:04:45.079233 4766 generic.go:334] "Generic (PLEG): container finished" podID="f510c8f4-ae74-4fce-af53-a743bdb3e2d3" containerID="367182be38e70abfeaa1acc6023195d61ddb0dcb9b8b457ff151f93d33869998" exitCode=0
Dec 13 04:04:45 crc kubenswrapper[4766]: I1213 04:04:45.079314 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" event={"ID":"f510c8f4-ae74-4fce-af53-a743bdb3e2d3","Type":"ContainerDied","Data":"367182be38e70abfeaa1acc6023195d61ddb0dcb9b8b457ff151f93d33869998"}
Dec 13 04:04:45 crc kubenswrapper[4766]: I1213 04:04:45.082044 4766 generic.go:334] "Generic (PLEG): container finished" podID="dcd5d2af-af90-4a46-9fe2-a70f544e7d66" containerID="de67f76bde6676009f8cc4415100911160d99a3e7c9fd4ed0a696d5e0ea3a6a3" exitCode=0
Dec 13 04:04:45 crc kubenswrapper[4766]: I1213 04:04:45.082075 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" event={"ID":"dcd5d2af-af90-4a46-9fe2-a70f544e7d66","Type":"ContainerDied","Data":"de67f76bde6676009f8cc4415100911160d99a3e7c9fd4ed0a696d5e0ea3a6a3"}
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.091107 4766 generic.go:334] "Generic (PLEG): container finished" podID="0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9" containerID="d09165a3bc2c2ad1341a5eaf05a4a0c95e3bb4a64bd05401aca14b8d58758556" exitCode=0
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.091170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-7t8dp" event={"ID":"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9","Type":"ContainerDied","Data":"d09165a3bc2c2ad1341a5eaf05a4a0c95e3bb4a64bd05401aca14b8d58758556"}
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.430549 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc"
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.435261 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2"
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.566362 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76lld\" (UniqueName: \"kubernetes.io/projected/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-kube-api-access-76lld\") pod \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") "
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.566512 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-bundle\") pod \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") "
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.566618 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mznmg\" (UniqueName: \"kubernetes.io/projected/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-kube-api-access-mznmg\") pod \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") "
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.566643 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-bundle\") pod \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") "
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.566685 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-util\") pod \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\" (UID: \"dcd5d2af-af90-4a46-9fe2-a70f544e7d66\") "
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.566740 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-util\") pod \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\" (UID: \"f510c8f4-ae74-4fce-af53-a743bdb3e2d3\") "
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.568159 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-bundle" (OuterVolumeSpecName: "bundle") pod "f510c8f4-ae74-4fce-af53-a743bdb3e2d3" (UID: "f510c8f4-ae74-4fce-af53-a743bdb3e2d3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.568280 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-bundle" (OuterVolumeSpecName: "bundle") pod "dcd5d2af-af90-4a46-9fe2-a70f544e7d66" (UID: "dcd5d2af-af90-4a46-9fe2-a70f544e7d66"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.579750 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-kube-api-access-mznmg" (OuterVolumeSpecName: "kube-api-access-mznmg") pod "dcd5d2af-af90-4a46-9fe2-a70f544e7d66" (UID: "dcd5d2af-af90-4a46-9fe2-a70f544e7d66"). InnerVolumeSpecName "kube-api-access-mznmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.579818 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-kube-api-access-76lld" (OuterVolumeSpecName: "kube-api-access-76lld") pod "f510c8f4-ae74-4fce-af53-a743bdb3e2d3" (UID: "f510c8f4-ae74-4fce-af53-a743bdb3e2d3"). InnerVolumeSpecName "kube-api-access-76lld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.580169 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-util" (OuterVolumeSpecName: "util") pod "f510c8f4-ae74-4fce-af53-a743bdb3e2d3" (UID: "f510c8f4-ae74-4fce-af53-a743bdb3e2d3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.585853 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-util" (OuterVolumeSpecName: "util") pod "dcd5d2af-af90-4a46-9fe2-a70f544e7d66" (UID: "dcd5d2af-af90-4a46-9fe2-a70f544e7d66"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.667906 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-util\") on node \"crc\" DevicePath \"\""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.667948 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76lld\" (UniqueName: \"kubernetes.io/projected/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-kube-api-access-76lld\") on node \"crc\" DevicePath \"\""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.667961 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f510c8f4-ae74-4fce-af53-a743bdb3e2d3-bundle\") on node \"crc\" DevicePath \"\""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.667972 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mznmg\" (UniqueName: \"kubernetes.io/projected/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-kube-api-access-mznmg\") on node \"crc\" DevicePath \"\""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.667980 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-bundle\") on node \"crc\" DevicePath \"\""
Dec 13 04:04:46 crc kubenswrapper[4766]: I1213 04:04:46.667987 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcd5d2af-af90-4a46-9fe2-a70f544e7d66-util\") on node \"crc\" DevicePath \"\""
Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.100286 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2" event={"ID":"f510c8f4-ae74-4fce-af53-a743bdb3e2d3","Type":"ContainerDied","Data":"b0aaef54d7dd4f0fae392efe24ea4897a8be32a1acf60531d1b20f5dcdef1204"}
Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.100799 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0aaef54d7dd4f0fae392efe24ea4897a8be32a1acf60531d1b20f5dcdef1204"
Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.100316 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2"
Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.104117 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc" event={"ID":"dcd5d2af-af90-4a46-9fe2-a70f544e7d66","Type":"ContainerDied","Data":"e9cff74c7ceb220ceff439d59f6736c5f5d28ee8df0d23c7804af3a873f55d28"}
Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.104176 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9cff74c7ceb220ceff439d59f6736c5f5d28ee8df0d23c7804af3a873f55d28"
Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.104417 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc"
Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.373322 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-7t8dp"
Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-7t8dp" Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.478250 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-config-data\") pod \"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9\" (UID: \"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9\") " Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.478484 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58tzx\" (UniqueName: \"kubernetes.io/projected/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-kube-api-access-58tzx\") pod \"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9\" (UID: \"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9\") " Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.482839 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-kube-api-access-58tzx" (OuterVolumeSpecName: "kube-api-access-58tzx") pod "0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9" (UID: "0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9"). InnerVolumeSpecName "kube-api-access-58tzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.526467 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-config-data" (OuterVolumeSpecName: "config-data") pod "0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9" (UID: "0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.580465 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58tzx\" (UniqueName: \"kubernetes.io/projected/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-kube-api-access-58tzx\") on node \"crc\" DevicePath \"\"" Dec 13 04:04:47 crc kubenswrapper[4766]: I1213 04:04:47.580511 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9-config-data\") on node \"crc\" DevicePath \"\"" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.112832 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-7t8dp" event={"ID":"0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9","Type":"ContainerDied","Data":"4e98afa8bda78d798ac980601f138af51f82d5264569c66b2ee602a67092937f"} Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.113174 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e98afa8bda78d798ac980601f138af51f82d5264569c66b2ee602a67092937f" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.112930 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-7t8dp" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.302840 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-jghjb"] Dec 13 04:04:48 crc kubenswrapper[4766]: E1213 04:04:48.303276 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd5d2af-af90-4a46-9fe2-a70f544e7d66" containerName="extract" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.303303 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd5d2af-af90-4a46-9fe2-a70f544e7d66" containerName="extract" Dec 13 04:04:48 crc kubenswrapper[4766]: E1213 04:04:48.303317 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f510c8f4-ae74-4fce-af53-a743bdb3e2d3" containerName="pull" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.303326 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f510c8f4-ae74-4fce-af53-a743bdb3e2d3" containerName="pull" Dec 13 04:04:48 crc kubenswrapper[4766]: E1213 04:04:48.303337 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9" containerName="keystone-db-sync" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.303348 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9" containerName="keystone-db-sync" Dec 13 04:04:48 crc kubenswrapper[4766]: E1213 04:04:48.303366 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd5d2af-af90-4a46-9fe2-a70f544e7d66" containerName="pull" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.303373 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd5d2af-af90-4a46-9fe2-a70f544e7d66" containerName="pull" Dec 13 04:04:48 crc kubenswrapper[4766]: E1213 04:04:48.303388 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd5d2af-af90-4a46-9fe2-a70f544e7d66" containerName="util" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.303396 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd5d2af-af90-4a46-9fe2-a70f544e7d66" containerName="util" Dec 13 04:04:48 crc kubenswrapper[4766]: E1213 04:04:48.303407 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f510c8f4-ae74-4fce-af53-a743bdb3e2d3" containerName="util" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.303415 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f510c8f4-ae74-4fce-af53-a743bdb3e2d3" containerName="util" Dec 13 04:04:48 crc kubenswrapper[4766]: E1213 04:04:48.303507 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f510c8f4-ae74-4fce-af53-a743bdb3e2d3" containerName="extract" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.303518 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f510c8f4-ae74-4fce-af53-a743bdb3e2d3" containerName="extract" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.303677 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9" containerName="keystone-db-sync" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.303695 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f510c8f4-ae74-4fce-af53-a743bdb3e2d3" containerName="extract" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.303709 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd5d2af-af90-4a46-9fe2-a70f544e7d66" containerName="extract" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 
04:04:48.304301 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.309154 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-jghjb"] Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.312638 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-5m4ff" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.313506 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.313689 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.313916 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.493798 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-fernet-keys\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.493917 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-credential-keys\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.493977 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-config-data\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.494006 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-scripts\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.494133 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdx6v\" (UniqueName: \"kubernetes.io/projected/02c1be2a-b38f-480c-83a8-2225cd2ed80a-kube-api-access-kdx6v\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.595883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-config-data\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.595947 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-scripts\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.596383 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdx6v\" (UniqueName: \"kubernetes.io/projected/02c1be2a-b38f-480c-83a8-2225cd2ed80a-kube-api-access-kdx6v\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.596413 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-fernet-keys\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.596480 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-credential-keys\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.603799 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-scripts\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.606208 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-config-data\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.606908 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-fernet-keys\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.613356 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-credential-keys\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.616469 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdx6v\" (UniqueName: \"kubernetes.io/projected/02c1be2a-b38f-480c-83a8-2225cd2ed80a-kube-api-access-kdx6v\") pod \"keystone-bootstrap-jghjb\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:48 crc kubenswrapper[4766]: I1213 04:04:48.628510 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:49 crc kubenswrapper[4766]: I1213 04:04:49.177669 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-jghjb"] Dec 13 04:04:50 crc kubenswrapper[4766]: I1213 04:04:50.129503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-jghjb" event={"ID":"02c1be2a-b38f-480c-83a8-2225cd2ed80a","Type":"ContainerStarted","Data":"6ff5bf7be4abeec0c4e73da37ef7cea156afa6b16e83f2065da50e4392fc0fae"} Dec 13 04:04:51 crc kubenswrapper[4766]: I1213 04:04:51.137624 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-jghjb" event={"ID":"02c1be2a-b38f-480c-83a8-2225cd2ed80a","Type":"ContainerStarted","Data":"628c79df45d943b9d96ca41ca0297013ba36bff3fe20e5c04e541b32fee7d58d"} Dec 13 04:04:51 crc kubenswrapper[4766]: I1213 04:04:51.160231 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-bootstrap-jghjb" podStartSLOduration=3.160213647 podStartE2EDuration="3.160213647s" podCreationTimestamp="2025-12-13 04:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:04:51.154973946 +0000 UTC m=+1222.664906920" watchObservedRunningTime="2025-12-13 04:04:51.160213647 +0000 UTC m=+1222.670146611" Dec 13 04:04:54 crc kubenswrapper[4766]: I1213 04:04:54.920497 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv"] Dec 13 04:04:54 crc kubenswrapper[4766]: I1213 04:04:54.922634 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:54 crc kubenswrapper[4766]: I1213 04:04:54.929979 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-88xsv" Dec 13 04:04:54 crc kubenswrapper[4766]: I1213 04:04:54.930060 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-service-cert" Dec 13 04:04:54 crc kubenswrapper[4766]: I1213 04:04:54.940565 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv"] Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.018979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ed09af0-6061-4c7d-9c33-82e4c82a24ac-apiservice-cert\") pod \"horizon-operator-controller-manager-f7949f797-6ctkv\" (UID: \"7ed09af0-6061-4c7d-9c33-82e4c82a24ac\") " pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.019410 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ed09af0-6061-4c7d-9c33-82e4c82a24ac-webhook-cert\") pod \"horizon-operator-controller-manager-f7949f797-6ctkv\" (UID: \"7ed09af0-6061-4c7d-9c33-82e4c82a24ac\") " pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.019615 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-fr75z\" (UniqueName: \"kubernetes.io/projected/7ed09af0-6061-4c7d-9c33-82e4c82a24ac-kube-api-access-fr75z\") pod \"horizon-operator-controller-manager-f7949f797-6ctkv\" (UID: \"7ed09af0-6061-4c7d-9c33-82e4c82a24ac\") " pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.121808 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr75z\" (UniqueName: \"kubernetes.io/projected/7ed09af0-6061-4c7d-9c33-82e4c82a24ac-kube-api-access-fr75z\") pod \"horizon-operator-controller-manager-f7949f797-6ctkv\" (UID: \"7ed09af0-6061-4c7d-9c33-82e4c82a24ac\") " pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.121867 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ed09af0-6061-4c7d-9c33-82e4c82a24ac-webhook-cert\") pod \"horizon-operator-controller-manager-f7949f797-6ctkv\" (UID: \"7ed09af0-6061-4c7d-9c33-82e4c82a24ac\") " pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.121964 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ed09af0-6061-4c7d-9c33-82e4c82a24ac-apiservice-cert\") pod \"horizon-operator-controller-manager-f7949f797-6ctkv\" (UID: \"7ed09af0-6061-4c7d-9c33-82e4c82a24ac\") " pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.129516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ed09af0-6061-4c7d-9c33-82e4c82a24ac-apiservice-cert\") pod \"horizon-operator-controller-manager-f7949f797-6ctkv\" (UID: \"7ed09af0-6061-4c7d-9c33-82e4c82a24ac\") " pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.139110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ed09af0-6061-4c7d-9c33-82e4c82a24ac-webhook-cert\") pod \"horizon-operator-controller-manager-f7949f797-6ctkv\" (UID: \"7ed09af0-6061-4c7d-9c33-82e4c82a24ac\") " pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.143777 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr75z\" (UniqueName: \"kubernetes.io/projected/7ed09af0-6061-4c7d-9c33-82e4c82a24ac-kube-api-access-fr75z\") pod \"horizon-operator-controller-manager-f7949f797-6ctkv\" (UID: \"7ed09af0-6061-4c7d-9c33-82e4c82a24ac\") " pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.257483 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:04:55 crc kubenswrapper[4766]: I1213 04:04:55.883775 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv"] Dec 13 04:04:56 crc kubenswrapper[4766]: I1213 04:04:56.261170 4766 generic.go:334] "Generic (PLEG): container finished" podID="02c1be2a-b38f-480c-83a8-2225cd2ed80a" containerID="628c79df45d943b9d96ca41ca0297013ba36bff3fe20e5c04e541b32fee7d58d" exitCode=0 Dec 13 04:04:56 crc kubenswrapper[4766]: I1213 04:04:56.261339 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-jghjb" event={"ID":"02c1be2a-b38f-480c-83a8-2225cd2ed80a","Type":"ContainerDied","Data":"628c79df45d943b9d96ca41ca0297013ba36bff3fe20e5c04e541b32fee7d58d"} Dec 13 04:04:56 crc kubenswrapper[4766]: I1213 04:04:56.265816 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" event={"ID":"7ed09af0-6061-4c7d-9c33-82e4c82a24ac","Type":"ContainerStarted","Data":"7e5388a8f06aede52f993ab311378d8b6f37a7d55b1106e2fed4c377f1812e9b"} Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.731250 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.852820 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-config-data\") pod \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.853502 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdx6v\" (UniqueName: \"kubernetes.io/projected/02c1be2a-b38f-480c-83a8-2225cd2ed80a-kube-api-access-kdx6v\") pod \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.853572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-scripts\") pod \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.853616 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-credential-keys\") pod \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.853664 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-fernet-keys\") pod \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\" (UID: \"02c1be2a-b38f-480c-83a8-2225cd2ed80a\") " Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.868342 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "02c1be2a-b38f-480c-83a8-2225cd2ed80a" (UID: "02c1be2a-b38f-480c-83a8-2225cd2ed80a"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.875763 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c1be2a-b38f-480c-83a8-2225cd2ed80a-kube-api-access-kdx6v" (OuterVolumeSpecName: "kube-api-access-kdx6v") pod "02c1be2a-b38f-480c-83a8-2225cd2ed80a" (UID: "02c1be2a-b38f-480c-83a8-2225cd2ed80a"). InnerVolumeSpecName "kube-api-access-kdx6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.881651 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-scripts" (OuterVolumeSpecName: "scripts") pod "02c1be2a-b38f-480c-83a8-2225cd2ed80a" (UID: "02c1be2a-b38f-480c-83a8-2225cd2ed80a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.883768 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "02c1be2a-b38f-480c-83a8-2225cd2ed80a" (UID: "02c1be2a-b38f-480c-83a8-2225cd2ed80a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.916684 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-config-data" (OuterVolumeSpecName: "config-data") pod "02c1be2a-b38f-480c-83a8-2225cd2ed80a" (UID: "02c1be2a-b38f-480c-83a8-2225cd2ed80a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.955827 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.955879 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-config-data\") on node \"crc\" DevicePath \"\"" Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.955893 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdx6v\" (UniqueName: \"kubernetes.io/projected/02c1be2a-b38f-480c-83a8-2225cd2ed80a-kube-api-access-kdx6v\") on node \"crc\" DevicePath \"\"" Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.955905 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-scripts\") on node \"crc\" DevicePath \"\"" Dec 13 04:04:57 crc kubenswrapper[4766]: I1213 04:04:57.955914 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02c1be2a-b38f-480c-83a8-2225cd2ed80a-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.290657 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-jghjb" event={"ID":"02c1be2a-b38f-480c-83a8-2225cd2ed80a","Type":"ContainerDied","Data":"6ff5bf7be4abeec0c4e73da37ef7cea156afa6b16e83f2065da50e4392fc0fae"} Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.290725 4766 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="6ff5bf7be4abeec0c4e73da37ef7cea156afa6b16e83f2065da50e4392fc0fae" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.290752 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-jghjb" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.404125 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl"] Dec 13 04:04:58 crc kubenswrapper[4766]: E1213 04:04:58.405273 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c1be2a-b38f-480c-83a8-2225cd2ed80a" containerName="keystone-bootstrap" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.405299 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c1be2a-b38f-480c-83a8-2225cd2ed80a" containerName="keystone-bootstrap" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.405501 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c1be2a-b38f-480c-83a8-2225cd2ed80a" containerName="keystone-bootstrap" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.406480 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.410268 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-5m4ff" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.410584 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.410889 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.413921 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl"] Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.415711 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Dec 13 04:04:58 crc kubenswrapper[4766]: E1213 04:04:58.509873 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c1be2a_b38f_480c_83a8_2225cd2ed80a.slice/crio-6ff5bf7be4abeec0c4e73da37ef7cea156afa6b16e83f2065da50e4392fc0fae\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c1be2a_b38f_480c_83a8_2225cd2ed80a.slice\": RecentStats: unable to find data in memory cache]" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.521872 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-scripts\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.521915 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-credential-keys\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 
04:04:58.521951 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-config-data\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.521980 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrhln\" (UniqueName: \"kubernetes.io/projected/1b8808f5-1647-40bb-a071-0592681524fb-kube-api-access-lrhln\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.522059 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-fernet-keys\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.623037 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-scripts\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.623091 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-credential-keys\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.623119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-config-data\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.623152 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrhln\" (UniqueName: \"kubernetes.io/projected/1b8808f5-1647-40bb-a071-0592681524fb-kube-api-access-lrhln\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.623210 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-fernet-keys\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.628039 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-fernet-keys\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 
04:04:58.629220 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-credential-keys\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.630221 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-config-data\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.639822 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8808f5-1647-40bb-a071-0592681524fb-scripts\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.643936 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrhln\" (UniqueName: \"kubernetes.io/projected/1b8808f5-1647-40bb-a071-0592681524fb-kube-api-access-lrhln\") pod \"keystone-5cf4ff88f8-pvnzl\" (UID: \"1b8808f5-1647-40bb-a071-0592681524fb\") " pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:58 crc kubenswrapper[4766]: I1213 04:04:58.739803 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:04:59 crc kubenswrapper[4766]: I1213 04:04:59.244170 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl"] Dec 13 04:04:59 crc kubenswrapper[4766]: I1213 04:04:59.299644 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" event={"ID":"1b8808f5-1647-40bb-a071-0592681524fb","Type":"ContainerStarted","Data":"ec2853866d42f197b5d5e892fad917dc2434d22fe0806cb244b83f98cf7bf962"} Dec 13 04:05:00 crc kubenswrapper[4766]: I1213 04:05:00.313825 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" event={"ID":"1b8808f5-1647-40bb-a071-0592681524fb","Type":"ContainerStarted","Data":"1424db05733c0d9e7f45c0f78870370e922f2c49247b8a1977aa3356c3694439"} Dec 13 04:05:00 crc kubenswrapper[4766]: I1213 04:05:00.314190 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:05:00 crc kubenswrapper[4766]: I1213 04:05:00.344982 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" podStartSLOduration=2.344955006 podStartE2EDuration="2.344955006s" podCreationTimestamp="2025-12-13 04:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:05:00.333544735 +0000 UTC m=+1231.843477699" watchObservedRunningTime="2025-12-13 04:05:00.344955006 +0000 UTC m=+1231.854887970" Dec 13 04:05:02 crc kubenswrapper[4766]: I1213 04:05:02.333416 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" 
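[Note: In the pod_startup_latency_tracker entries, podStartSLOduration is podStartE2EDuration minus the time spent pulling images. For keystone-5cf4ff88f8-pvnzl above, nothing was pulled (both pull timestamps are the zero time), so the two values match at 2.344955006s. The horizon-operator entry just below shows the subtraction with a real pull window, which checks out numerically:

    # Seconds-within-the-minute taken from the horizon-operator entry below.
    first_started_pulling = 56.230283066   # 04:04:56.230283066
    last_finished_pulling = 61.727663698   # 04:05:01.727663698
    e2e = 8.359509507                      # podStartE2EDuration

    slo = e2e - (last_finished_pulling - first_started_pulling)
    print(f"{slo:.9f}")   # ~2.862128875, the reported podStartSLOduration
]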
event={"ID":"7ed09af0-6061-4c7d-9c33-82e4c82a24ac","Type":"ContainerStarted","Data":"7608ad7c2e5af4b202dbfd2194512fd24dde0aa3bf95042037155faa9e61e775"} Dec 13 04:05:02 crc kubenswrapper[4766]: I1213 04:05:02.334328 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:05:02 crc kubenswrapper[4766]: I1213 04:05:02.334347 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" event={"ID":"7ed09af0-6061-4c7d-9c33-82e4c82a24ac","Type":"ContainerStarted","Data":"e71f075bf6033443465f82c14089fc1f29313345819f3aec1d7040d330251e6d"} Dec 13 04:05:02 crc kubenswrapper[4766]: I1213 04:05:02.359538 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" podStartSLOduration=2.862128875 podStartE2EDuration="8.359509507s" podCreationTimestamp="2025-12-13 04:04:54 +0000 UTC" firstStartedPulling="2025-12-13 04:04:56.230283066 +0000 UTC m=+1227.740216030" lastFinishedPulling="2025-12-13 04:05:01.727663698 +0000 UTC m=+1233.237596662" observedRunningTime="2025-12-13 04:05:02.353891014 +0000 UTC m=+1233.863823978" watchObservedRunningTime="2025-12-13 04:05:02.359509507 +0000 UTC m=+1233.869442471" Dec 13 04:05:03 crc kubenswrapper[4766]: I1213 04:05:03.892163 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82"] Dec 13 04:05:03 crc kubenswrapper[4766]: I1213 04:05:03.893732 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:03 crc kubenswrapper[4766]: I1213 04:05:03.895592 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Dec 13 04:05:03 crc kubenswrapper[4766]: I1213 04:05:03.896200 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2n89r" Dec 13 04:05:03 crc kubenswrapper[4766]: I1213 04:05:03.913888 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82"] Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.068481 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbcfg\" (UniqueName: \"kubernetes.io/projected/da849bf2-7205-4eb8-b526-37115b6632de-kube-api-access-bbcfg\") pod \"swift-operator-controller-manager-5c69cff4d6-rrx82\" (UID: \"da849bf2-7205-4eb8-b526-37115b6632de\") " pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.068568 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da849bf2-7205-4eb8-b526-37115b6632de-apiservice-cert\") pod \"swift-operator-controller-manager-5c69cff4d6-rrx82\" (UID: \"da849bf2-7205-4eb8-b526-37115b6632de\") " pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.068593 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/da849bf2-7205-4eb8-b526-37115b6632de-webhook-cert\") pod \"swift-operator-controller-manager-5c69cff4d6-rrx82\" (UID: \"da849bf2-7205-4eb8-b526-37115b6632de\") " pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.170840 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbcfg\" (UniqueName: \"kubernetes.io/projected/da849bf2-7205-4eb8-b526-37115b6632de-kube-api-access-bbcfg\") pod \"swift-operator-controller-manager-5c69cff4d6-rrx82\" (UID: \"da849bf2-7205-4eb8-b526-37115b6632de\") " pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.170919 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da849bf2-7205-4eb8-b526-37115b6632de-apiservice-cert\") pod \"swift-operator-controller-manager-5c69cff4d6-rrx82\" (UID: \"da849bf2-7205-4eb8-b526-37115b6632de\") " pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.170949 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da849bf2-7205-4eb8-b526-37115b6632de-webhook-cert\") pod \"swift-operator-controller-manager-5c69cff4d6-rrx82\" (UID: \"da849bf2-7205-4eb8-b526-37115b6632de\") " pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.177673 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da849bf2-7205-4eb8-b526-37115b6632de-apiservice-cert\") pod \"swift-operator-controller-manager-5c69cff4d6-rrx82\" (UID: \"da849bf2-7205-4eb8-b526-37115b6632de\") " pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.185263 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da849bf2-7205-4eb8-b526-37115b6632de-webhook-cert\") pod \"swift-operator-controller-manager-5c69cff4d6-rrx82\" (UID: \"da849bf2-7205-4eb8-b526-37115b6632de\") " pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.194593 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbcfg\" (UniqueName: \"kubernetes.io/projected/da849bf2-7205-4eb8-b526-37115b6632de-kube-api-access-bbcfg\") pod \"swift-operator-controller-manager-5c69cff4d6-rrx82\" (UID: \"da849bf2-7205-4eb8-b526-37115b6632de\") " pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.215361 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:04 crc kubenswrapper[4766]: I1213 04:05:04.684486 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82"] Dec 13 04:05:05 crc kubenswrapper[4766]: I1213 04:05:05.358599 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" event={"ID":"da849bf2-7205-4eb8-b526-37115b6632de","Type":"ContainerStarted","Data":"9fc6b4a44ef47356e45ea0b13b5bcc9776f24f4266f66abd92f913ed39119863"} Dec 13 04:05:08 crc kubenswrapper[4766]: I1213 04:05:08.442937 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" event={"ID":"da849bf2-7205-4eb8-b526-37115b6632de","Type":"ContainerStarted","Data":"8713ebdbd2562835c229c3db61c3693d6db461b35c20952f0865ae0afd0ff8f6"} Dec 13 04:05:08 crc kubenswrapper[4766]: I1213 04:05:08.445310 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:08 crc kubenswrapper[4766]: I1213 04:05:08.445391 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" event={"ID":"da849bf2-7205-4eb8-b526-37115b6632de","Type":"ContainerStarted","Data":"d159dc91fc884383e5a2284387af91129679ee83c786c78c2061cf38431fc7da"} Dec 13 04:05:09 crc kubenswrapper[4766]: I1213 04:05:09.732824 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:05:09 crc kubenswrapper[4766]: I1213 04:05:09.733603 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:05:14 crc kubenswrapper[4766]: I1213 04:05:14.220829 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" Dec 13 04:05:14 crc kubenswrapper[4766]: I1213 04:05:14.237979 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5c69cff4d6-rrx82" podStartSLOduration=8.399745114 podStartE2EDuration="11.237960314s" podCreationTimestamp="2025-12-13 04:05:03 +0000 UTC" firstStartedPulling="2025-12-13 04:05:04.692691092 +0000 UTC m=+1236.202624056" lastFinishedPulling="2025-12-13 04:05:07.530906292 +0000 UTC m=+1239.040839256" observedRunningTime="2025-12-13 04:05:08.477908266 +0000 UTC m=+1239.987841240" watchObservedRunningTime="2025-12-13 04:05:14.237960314 +0000 UTC m=+1245.747893278" Dec 13 04:05:15 crc kubenswrapper[4766]: I1213 04:05:15.263042 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-f7949f797-6ctkv" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.654878 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Dec 13 
04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.660244 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.665678 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-storage-config-data" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.666297 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-conf" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.666841 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-swift-dockercfg-4xggd" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.667116 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-files" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.687247 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.850899 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wfbx\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-kube-api-access-6wfbx\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.850967 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9d937b76-c14b-462b-8de2-fecf78a9d3cf-cache\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.851006 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.851148 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.851232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9d937b76-c14b-462b-8de2-fecf78a9d3cf-lock\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.953632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.953746 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9d937b76-c14b-462b-8de2-fecf78a9d3cf-lock\") pod \"swift-storage-0\" (UID: 
\"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.953793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wfbx\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-kube-api-access-6wfbx\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.953859 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9d937b76-c14b-462b-8de2-fecf78a9d3cf-cache\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.953898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: E1213 04:05:17.954142 4766 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.954158 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: E1213 04:05:17.954170 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Dec 13 04:05:17 crc kubenswrapper[4766]: E1213 04:05:17.954462 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift podName:9d937b76-c14b-462b-8de2-fecf78a9d3cf nodeName:}" failed. No retries permitted until 2025-12-13 04:05:18.454381393 +0000 UTC m=+1249.964314357 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift") pod "swift-storage-0" (UID: "9d937b76-c14b-462b-8de2-fecf78a9d3cf") : configmap "swift-ring-files" not found Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.954835 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9d937b76-c14b-462b-8de2-fecf78a9d3cf-lock\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.955447 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9d937b76-c14b-462b-8de2-fecf78a9d3cf-cache\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.974871 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wfbx\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-kube-api-access-6wfbx\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:17 crc kubenswrapper[4766]: I1213 04:05:17.977141 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.250220 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-d5scc"] Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.251515 4766 util.go:30] "No sandbox for pod can be found. 
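[Note: The etc-swift failure above is an ordering dependency, not a fault: the projected volume wants the swift-ring-files configmap, which is only produced by the swift-ring-rebalance job being scheduled in these same entries. The kubelet simply retries with a doubling backoff (durationBeforeRetry 500ms here, 1s on the next attempt below) until the configmap appears. A minimal sketch of the equivalent polling from outside the node; the 30s cap is an assumption for the sketch, not kubelet's actual limit:

    import time
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()

    delay = 0.5
    while True:
        try:
            v1.read_namespaced_config_map("swift-ring-files", "glance-kuttl-tests")
            print("swift-ring-files exists; etc-swift can now mount")
            break
        except ApiException as err:
            if err.status != 404:
                raise
            time.sleep(delay)
            delay = min(delay * 2, 30)  # doubling backoff, capped (cap assumed)
]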
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.253992 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-proxy-config-data" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.254089 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-scripts" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.256985 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-config-data" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.266299 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-d5scc"] Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.285956 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-d5scc"] Dec 13 04:05:18 crc kubenswrapper[4766]: E1213 04:05:18.287161 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[dispersionconf etc-swift kube-api-access-7qgqz ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" podUID="256e0580-ba89-475a-a509-e6995259beb7" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.354810 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-8hn4p"] Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.356190 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.362742 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qgqz\" (UniqueName: \"kubernetes.io/projected/256e0580-ba89-475a-a509-e6995259beb7-kube-api-access-7qgqz\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.362832 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-ring-data-devices\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.362933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-swiftconf\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.362985 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-dispersionconf\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.363077 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-scripts\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.363138 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/256e0580-ba89-475a-a509-e6995259beb7-etc-swift\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.363302 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-8hn4p"] Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.464598 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-scripts\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.464972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-swiftconf\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.465166 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-dispersionconf\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.465276 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/375c2b15-d870-4c02-bb26-f7deac6a4e81-etc-swift\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.465382 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxx8c\" (UniqueName: \"kubernetes.io/projected/375c2b15-d870-4c02-bb26-f7deac6a4e81-kube-api-access-xxx8c\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.465516 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-ring-data-devices\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.465687 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-swiftconf\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " 
pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.465807 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-scripts\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.465930 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-dispersionconf\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.466081 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/256e0580-ba89-475a-a509-e6995259beb7-etc-swift\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.466318 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.466964 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/256e0580-ba89-475a-a509-e6995259beb7-etc-swift\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: E1213 04:05:18.466536 4766 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Dec 13 04:05:18 crc kubenswrapper[4766]: E1213 04:05:18.467251 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Dec 13 04:05:18 crc kubenswrapper[4766]: E1213 04:05:18.467409 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift podName:9d937b76-c14b-462b-8de2-fecf78a9d3cf nodeName:}" failed. No retries permitted until 2025-12-13 04:05:19.46738791 +0000 UTC m=+1250.977320884 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift") pod "swift-storage-0" (UID: "9d937b76-c14b-462b-8de2-fecf78a9d3cf") : configmap "swift-ring-files" not found Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.467554 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-scripts\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.468053 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qgqz\" (UniqueName: \"kubernetes.io/projected/256e0580-ba89-475a-a509-e6995259beb7-kube-api-access-7qgqz\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.468179 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-ring-data-devices\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.469147 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-ring-data-devices\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.470631 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-swiftconf\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.476617 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-dispersionconf\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.489239 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qgqz\" (UniqueName: \"kubernetes.io/projected/256e0580-ba89-475a-a509-e6995259beb7-kube-api-access-7qgqz\") pod \"swift-ring-rebalance-d5scc\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.509630 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.527119 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.570231 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-scripts\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.570348 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/375c2b15-d870-4c02-bb26-f7deac6a4e81-etc-swift\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.570372 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-ring-data-devices\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.570394 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxx8c\" (UniqueName: \"kubernetes.io/projected/375c2b15-d870-4c02-bb26-f7deac6a4e81-kube-api-access-xxx8c\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.570461 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-swiftconf\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.570501 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-dispersionconf\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.571468 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/375c2b15-d870-4c02-bb26-f7deac6a4e81-etc-swift\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.571697 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-scripts\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.571844 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-ring-data-devices\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " 
pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.575520 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-swiftconf\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.578153 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-dispersionconf\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.595688 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxx8c\" (UniqueName: \"kubernetes.io/projected/375c2b15-d870-4c02-bb26-f7deac6a4e81-kube-api-access-xxx8c\") pod \"swift-ring-rebalance-8hn4p\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.671629 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-ring-data-devices\") pod \"256e0580-ba89-475a-a509-e6995259beb7\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.672639 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-dispersionconf\") pod \"256e0580-ba89-475a-a509-e6995259beb7\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.672767 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qgqz\" (UniqueName: \"kubernetes.io/projected/256e0580-ba89-475a-a509-e6995259beb7-kube-api-access-7qgqz\") pod \"256e0580-ba89-475a-a509-e6995259beb7\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.672940 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/256e0580-ba89-475a-a509-e6995259beb7-etc-swift\") pod \"256e0580-ba89-475a-a509-e6995259beb7\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.673044 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-swiftconf\") pod \"256e0580-ba89-475a-a509-e6995259beb7\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.673244 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-scripts\") pod \"256e0580-ba89-475a-a509-e6995259beb7\" (UID: \"256e0580-ba89-475a-a509-e6995259beb7\") " Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.672635 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-ring-data-devices" (OuterVolumeSpecName: 
"ring-data-devices") pod "256e0580-ba89-475a-a509-e6995259beb7" (UID: "256e0580-ba89-475a-a509-e6995259beb7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.674095 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-scripts" (OuterVolumeSpecName: "scripts") pod "256e0580-ba89-475a-a509-e6995259beb7" (UID: "256e0580-ba89-475a-a509-e6995259beb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.674255 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/256e0580-ba89-475a-a509-e6995259beb7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "256e0580-ba89-475a-a509-e6995259beb7" (UID: "256e0580-ba89-475a-a509-e6995259beb7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.674477 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-scripts\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.674577 4766 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/256e0580-ba89-475a-a509-e6995259beb7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.677245 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.680842 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "256e0580-ba89-475a-a509-e6995259beb7" (UID: "256e0580-ba89-475a-a509-e6995259beb7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.680843 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256e0580-ba89-475a-a509-e6995259beb7-kube-api-access-7qgqz" (OuterVolumeSpecName: "kube-api-access-7qgqz") pod "256e0580-ba89-475a-a509-e6995259beb7" (UID: "256e0580-ba89-475a-a509-e6995259beb7"). InnerVolumeSpecName "kube-api-access-7qgqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.681034 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "256e0580-ba89-475a-a509-e6995259beb7" (UID: "256e0580-ba89-475a-a509-e6995259beb7"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.867536 4766 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.867563 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qgqz\" (UniqueName: \"kubernetes.io/projected/256e0580-ba89-475a-a509-e6995259beb7-kube-api-access-7qgqz\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.867594 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/256e0580-ba89-475a-a509-e6995259beb7-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.867607 4766 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/256e0580-ba89-475a-a509-e6995259beb7-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.912319 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-index-fr9sf"] Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.914333 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-fr9sf" Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.922407 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-fr9sf"] Dec 13 04:05:18 crc kubenswrapper[4766]: I1213 04:05:18.927615 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-index-dockercfg-5qdlk" Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.026009 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b755\" (UniqueName: \"kubernetes.io/projected/01969681-994b-4f68-9821-ff37b233fbb1-kube-api-access-2b755\") pod \"glance-operator-index-fr9sf\" (UID: \"01969681-994b-4f68-9821-ff37b233fbb1\") " pod="openstack-operators/glance-operator-index-fr9sf" Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.127938 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b755\" (UniqueName: \"kubernetes.io/projected/01969681-994b-4f68-9821-ff37b233fbb1-kube-api-access-2b755\") pod \"glance-operator-index-fr9sf\" (UID: \"01969681-994b-4f68-9821-ff37b233fbb1\") " pod="openstack-operators/glance-operator-index-fr9sf" Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.149546 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b755\" (UniqueName: \"kubernetes.io/projected/01969681-994b-4f68-9821-ff37b233fbb1-kube-api-access-2b755\") pod \"glance-operator-index-fr9sf\" (UID: \"01969681-994b-4f68-9821-ff37b233fbb1\") " pod="openstack-operators/glance-operator-index-fr9sf" Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.356457 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-fr9sf" Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.396776 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-8hn4p"] Dec 13 04:05:19 crc kubenswrapper[4766]: W1213 04:05:19.405821 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod375c2b15_d870_4c02_bb26_f7deac6a4e81.slice/crio-bb85691050eaa3146f4b3162674e004b54d691a98a3244244b970fe8434054e6 WatchSource:0}: Error finding container bb85691050eaa3146f4b3162674e004b54d691a98a3244244b970fe8434054e6: Status 404 returned error can't find the container with id bb85691050eaa3146f4b3162674e004b54d691a98a3244244b970fe8434054e6 Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.519877 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-d5scc" Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.520096 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" event={"ID":"375c2b15-d870-4c02-bb26-f7deac6a4e81","Type":"ContainerStarted","Data":"bb85691050eaa3146f4b3162674e004b54d691a98a3244244b970fe8434054e6"} Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.537892 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:19 crc kubenswrapper[4766]: E1213 04:05:19.538141 4766 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Dec 13 04:05:19 crc kubenswrapper[4766]: E1213 04:05:19.538164 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Dec 13 04:05:19 crc kubenswrapper[4766]: E1213 04:05:19.538219 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift podName:9d937b76-c14b-462b-8de2-fecf78a9d3cf nodeName:}" failed. No retries permitted until 2025-12-13 04:05:21.538201015 +0000 UTC m=+1253.048133979 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift") pod "swift-storage-0" (UID: "9d937b76-c14b-462b-8de2-fecf78a9d3cf") : configmap "swift-ring-files" not found Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.576053 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-d5scc"] Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.585045 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-d5scc"] Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.626061 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256e0580-ba89-475a-a509-e6995259beb7" path="/var/lib/kubelet/pods/256e0580-ba89-475a-a509-e6995259beb7/volumes" Dec 13 04:05:19 crc kubenswrapper[4766]: I1213 04:05:19.904522 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-fr9sf"] Dec 13 04:05:20 crc kubenswrapper[4766]: I1213 04:05:20.535414 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-fr9sf" event={"ID":"01969681-994b-4f68-9821-ff37b233fbb1","Type":"ContainerStarted","Data":"c50423afd32c716ef7806c8aeae2759f79349ed5e234146f1a41a54fd2b1c4e5"} Dec 13 04:05:21 crc kubenswrapper[4766]: I1213 04:05:21.633321 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:21 crc kubenswrapper[4766]: E1213 04:05:21.633525 4766 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Dec 13 04:05:21 crc kubenswrapper[4766]: E1213 04:05:21.633551 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Dec 13 04:05:21 crc kubenswrapper[4766]: E1213 04:05:21.633604 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift podName:9d937b76-c14b-462b-8de2-fecf78a9d3cf nodeName:}" failed. No retries permitted until 2025-12-13 04:05:25.633584452 +0000 UTC m=+1257.143517416 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift") pod "swift-storage-0" (UID: "9d937b76-c14b-462b-8de2-fecf78a9d3cf") : configmap "swift-ring-files" not found Dec 13 04:05:23 crc kubenswrapper[4766]: I1213 04:05:23.222070 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-fr9sf"] Dec 13 04:05:23 crc kubenswrapper[4766]: I1213 04:05:23.824483 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-index-zsl6f"] Dec 13 04:05:23 crc kubenswrapper[4766]: I1213 04:05:23.826207 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-zsl6f" Dec 13 04:05:23 crc kubenswrapper[4766]: I1213 04:05:23.831561 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-zsl6f"] Dec 13 04:05:23 crc kubenswrapper[4766]: I1213 04:05:23.913511 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt9x8\" (UniqueName: \"kubernetes.io/projected/2b382d40-1957-43a4-82bf-8af8648a5857-kube-api-access-tt9x8\") pod \"glance-operator-index-zsl6f\" (UID: \"2b382d40-1957-43a4-82bf-8af8648a5857\") " pod="openstack-operators/glance-operator-index-zsl6f" Dec 13 04:05:24 crc kubenswrapper[4766]: I1213 04:05:24.014938 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt9x8\" (UniqueName: \"kubernetes.io/projected/2b382d40-1957-43a4-82bf-8af8648a5857-kube-api-access-tt9x8\") pod \"glance-operator-index-zsl6f\" (UID: \"2b382d40-1957-43a4-82bf-8af8648a5857\") " pod="openstack-operators/glance-operator-index-zsl6f" Dec 13 04:05:24 crc kubenswrapper[4766]: I1213 04:05:24.033761 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt9x8\" (UniqueName: \"kubernetes.io/projected/2b382d40-1957-43a4-82bf-8af8648a5857-kube-api-access-tt9x8\") pod \"glance-operator-index-zsl6f\" (UID: \"2b382d40-1957-43a4-82bf-8af8648a5857\") " pod="openstack-operators/glance-operator-index-zsl6f" Dec 13 04:05:24 crc kubenswrapper[4766]: I1213 04:05:24.150792 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-zsl6f" Dec 13 04:05:24 crc kubenswrapper[4766]: I1213 04:05:24.928476 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-zsl6f"] Dec 13 04:05:25 crc kubenswrapper[4766]: I1213 04:05:25.598443 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" event={"ID":"375c2b15-d870-4c02-bb26-f7deac6a4e81","Type":"ContainerStarted","Data":"192b293b7cf3fdfa17f9b60a0fea81e4ba03de8c31aa667b5b34e60435c6c4a4"} Dec 13 04:05:25 crc kubenswrapper[4766]: I1213 04:05:25.600441 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-zsl6f" event={"ID":"2b382d40-1957-43a4-82bf-8af8648a5857","Type":"ContainerStarted","Data":"4a48435bbc94be586fafff89377f1b16897270a17b9763fe47e432c81cedfccc"} Dec 13 04:05:25 crc kubenswrapper[4766]: I1213 04:05:25.600727 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-zsl6f" event={"ID":"2b382d40-1957-43a4-82bf-8af8648a5857","Type":"ContainerStarted","Data":"44df1c0bdb1ab76276abcc156a765c8fc1d6afa891ae9731c191134992bddd5b"} Dec 13 04:05:25 crc kubenswrapper[4766]: I1213 04:05:25.602634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-fr9sf" event={"ID":"01969681-994b-4f68-9821-ff37b233fbb1","Type":"ContainerStarted","Data":"d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed"} Dec 13 04:05:25 crc kubenswrapper[4766]: I1213 04:05:25.602784 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/glance-operator-index-fr9sf" podUID="01969681-994b-4f68-9821-ff37b233fbb1" containerName="registry-server" containerID="cri-o://d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed" gracePeriod=2 Dec 13 04:05:25 crc 
kubenswrapper[4766]: I1213 04:05:25.622300 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" podStartSLOduration=2.8620269560000002 podStartE2EDuration="7.622280757s" podCreationTimestamp="2025-12-13 04:05:18 +0000 UTC" firstStartedPulling="2025-12-13 04:05:19.40803982 +0000 UTC m=+1250.917972784" lastFinishedPulling="2025-12-13 04:05:24.168293621 +0000 UTC m=+1255.678226585" observedRunningTime="2025-12-13 04:05:25.617155319 +0000 UTC m=+1257.127088283" watchObservedRunningTime="2025-12-13 04:05:25.622280757 +0000 UTC m=+1257.132213721" Dec 13 04:05:25 crc kubenswrapper[4766]: I1213 04:05:25.640624 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-index-fr9sf" podStartSLOduration=3.4071678260000002 podStartE2EDuration="7.640604389s" podCreationTimestamp="2025-12-13 04:05:18 +0000 UTC" firstStartedPulling="2025-12-13 04:05:19.917024121 +0000 UTC m=+1251.426957105" lastFinishedPulling="2025-12-13 04:05:24.150460704 +0000 UTC m=+1255.660393668" observedRunningTime="2025-12-13 04:05:25.636090598 +0000 UTC m=+1257.146023562" watchObservedRunningTime="2025-12-13 04:05:25.640604389 +0000 UTC m=+1257.150537373" Dec 13 04:05:25 crc kubenswrapper[4766]: I1213 04:05:25.659958 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-index-zsl6f" podStartSLOduration=2.6059984050000002 podStartE2EDuration="2.659934579s" podCreationTimestamp="2025-12-13 04:05:23 +0000 UTC" firstStartedPulling="2025-12-13 04:05:24.937120898 +0000 UTC m=+1256.447053862" lastFinishedPulling="2025-12-13 04:05:24.991057062 +0000 UTC m=+1256.500990036" observedRunningTime="2025-12-13 04:05:25.650410973 +0000 UTC m=+1257.160343957" watchObservedRunningTime="2025-12-13 04:05:25.659934579 +0000 UTC m=+1257.169867553" Dec 13 04:05:25 crc kubenswrapper[4766]: I1213 04:05:25.671970 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:25 crc kubenswrapper[4766]: E1213 04:05:25.672227 4766 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Dec 13 04:05:25 crc kubenswrapper[4766]: E1213 04:05:25.672789 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Dec 13 04:05:25 crc kubenswrapper[4766]: E1213 04:05:25.672877 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift podName:9d937b76-c14b-462b-8de2-fecf78a9d3cf nodeName:}" failed. No retries permitted until 2025-12-13 04:05:33.672841394 +0000 UTC m=+1265.182774358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift") pod "swift-storage-0" (UID: "9d937b76-c14b-462b-8de2-fecf78a9d3cf") : configmap "swift-ring-files" not found Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.137607 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-fr9sf" Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.293530 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b755\" (UniqueName: \"kubernetes.io/projected/01969681-994b-4f68-9821-ff37b233fbb1-kube-api-access-2b755\") pod \"01969681-994b-4f68-9821-ff37b233fbb1\" (UID: \"01969681-994b-4f68-9821-ff37b233fbb1\") " Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.301085 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01969681-994b-4f68-9821-ff37b233fbb1-kube-api-access-2b755" (OuterVolumeSpecName: "kube-api-access-2b755") pod "01969681-994b-4f68-9821-ff37b233fbb1" (UID: "01969681-994b-4f68-9821-ff37b233fbb1"). InnerVolumeSpecName "kube-api-access-2b755". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.395458 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b755\" (UniqueName: \"kubernetes.io/projected/01969681-994b-4f68-9821-ff37b233fbb1-kube-api-access-2b755\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.612336 4766 generic.go:334] "Generic (PLEG): container finished" podID="01969681-994b-4f68-9821-ff37b233fbb1" containerID="d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed" exitCode=0 Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.612451 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-fr9sf" Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.612456 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-fr9sf" event={"ID":"01969681-994b-4f68-9821-ff37b233fbb1","Type":"ContainerDied","Data":"d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed"} Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.612531 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-fr9sf" event={"ID":"01969681-994b-4f68-9821-ff37b233fbb1","Type":"ContainerDied","Data":"c50423afd32c716ef7806c8aeae2759f79349ed5e234146f1a41a54fd2b1c4e5"} Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.612582 4766 scope.go:117] "RemoveContainer" containerID="d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed" Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.635034 4766 scope.go:117] "RemoveContainer" containerID="d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed" Dec 13 04:05:26 crc kubenswrapper[4766]: E1213 04:05:26.635521 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed\": container with ID starting with d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed not found: ID does not exist" containerID="d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed" Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.635567 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed"} err="failed to get container status \"d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed\": rpc error: code = NotFound desc = could not find container 
\"d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed\": container with ID starting with d3a23bf3871a750dd59c249ed4e0fe5b95ec624ca9b9c21b2c370deae03fd7ed not found: ID does not exist" Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.652866 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-fr9sf"] Dec 13 04:05:26 crc kubenswrapper[4766]: I1213 04:05:26.660631 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/glance-operator-index-fr9sf"] Dec 13 04:05:27 crc kubenswrapper[4766]: I1213 04:05:27.628120 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01969681-994b-4f68-9821-ff37b233fbb1" path="/var/lib/kubelet/pods/01969681-994b-4f68-9821-ff37b233fbb1/volumes" Dec 13 04:05:30 crc kubenswrapper[4766]: I1213 04:05:30.675417 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/keystone-5cf4ff88f8-pvnzl" Dec 13 04:05:33 crc kubenswrapper[4766]: I1213 04:05:33.675069 4766 generic.go:334] "Generic (PLEG): container finished" podID="375c2b15-d870-4c02-bb26-f7deac6a4e81" containerID="192b293b7cf3fdfa17f9b60a0fea81e4ba03de8c31aa667b5b34e60435c6c4a4" exitCode=0 Dec 13 04:05:33 crc kubenswrapper[4766]: I1213 04:05:33.675158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" event={"ID":"375c2b15-d870-4c02-bb26-f7deac6a4e81","Type":"ContainerDied","Data":"192b293b7cf3fdfa17f9b60a0fea81e4ba03de8c31aa667b5b34e60435c6c4a4"} Dec 13 04:05:33 crc kubenswrapper[4766]: I1213 04:05:33.746964 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:33 crc kubenswrapper[4766]: I1213 04:05:33.756532 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9d937b76-c14b-462b-8de2-fecf78a9d3cf-etc-swift\") pod \"swift-storage-0\" (UID: \"9d937b76-c14b-462b-8de2-fecf78a9d3cf\") " pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:33 crc kubenswrapper[4766]: I1213 04:05:33.883252 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.151809 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-index-zsl6f" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.152148 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/glance-operator-index-zsl6f" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.188366 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/glance-operator-index-zsl6f" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.403307 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Dec 13 04:05:34 crc kubenswrapper[4766]: W1213 04:05:34.404868 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d937b76_c14b_462b_8de2_fecf78a9d3cf.slice/crio-2ddbc7991a4b3fb11fcb5c3f6211246ca76274006f36c5b45fa44d4c43a8d2fe WatchSource:0}: Error finding container 2ddbc7991a4b3fb11fcb5c3f6211246ca76274006f36c5b45fa44d4c43a8d2fe: Status 404 returned error can't find the container with id 2ddbc7991a4b3fb11fcb5c3f6211246ca76274006f36c5b45fa44d4c43a8d2fe Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.636410 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt"] Dec 13 04:05:34 crc kubenswrapper[4766]: E1213 04:05:34.637282 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01969681-994b-4f68-9821-ff37b233fbb1" containerName="registry-server" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.637342 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="01969681-994b-4f68-9821-ff37b233fbb1" containerName="registry-server" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.637567 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="01969681-994b-4f68-9821-ff37b233fbb1" containerName="registry-server" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.638560 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.650458 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt"] Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.685501 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"2ddbc7991a4b3fb11fcb5c3f6211246ca76274006f36c5b45fa44d4c43a8d2fe"} Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.742359 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-index-zsl6f" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.763353 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d45f9168-9e25-4f26-9c4b-22fff074e16f-etc-swift\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.763409 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d45f9168-9e25-4f26-9c4b-22fff074e16f-log-httpd\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.763624 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d45f9168-9e25-4f26-9c4b-22fff074e16f-config-data\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.763762 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rcdr\" (UniqueName: \"kubernetes.io/projected/d45f9168-9e25-4f26-9c4b-22fff074e16f-kube-api-access-6rcdr\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.763786 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d45f9168-9e25-4f26-9c4b-22fff074e16f-run-httpd\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.865222 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d45f9168-9e25-4f26-9c4b-22fff074e16f-config-data\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.865306 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rcdr\" (UniqueName: \"kubernetes.io/projected/d45f9168-9e25-4f26-9c4b-22fff074e16f-kube-api-access-6rcdr\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " 
pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.865340 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d45f9168-9e25-4f26-9c4b-22fff074e16f-run-httpd\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.865468 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d45f9168-9e25-4f26-9c4b-22fff074e16f-etc-swift\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.865495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d45f9168-9e25-4f26-9c4b-22fff074e16f-log-httpd\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.866280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d45f9168-9e25-4f26-9c4b-22fff074e16f-log-httpd\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.868761 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d45f9168-9e25-4f26-9c4b-22fff074e16f-run-httpd\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.880037 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d45f9168-9e25-4f26-9c4b-22fff074e16f-etc-swift\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.884583 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d45f9168-9e25-4f26-9c4b-22fff074e16f-config-data\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.890722 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rcdr\" (UniqueName: \"kubernetes.io/projected/d45f9168-9e25-4f26-9c4b-22fff074e16f-kube-api-access-6rcdr\") pod \"swift-proxy-8cfd9857-9gfdt\" (UID: \"d45f9168-9e25-4f26-9c4b-22fff074e16f\") " pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:34 crc kubenswrapper[4766]: I1213 04:05:34.962158 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.037597 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.169499 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-swiftconf\") pod \"375c2b15-d870-4c02-bb26-f7deac6a4e81\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.169555 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-scripts\") pod \"375c2b15-d870-4c02-bb26-f7deac6a4e81\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.169703 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxx8c\" (UniqueName: \"kubernetes.io/projected/375c2b15-d870-4c02-bb26-f7deac6a4e81-kube-api-access-xxx8c\") pod \"375c2b15-d870-4c02-bb26-f7deac6a4e81\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.169743 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/375c2b15-d870-4c02-bb26-f7deac6a4e81-etc-swift\") pod \"375c2b15-d870-4c02-bb26-f7deac6a4e81\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.169809 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-dispersionconf\") pod \"375c2b15-d870-4c02-bb26-f7deac6a4e81\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.169842 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-ring-data-devices\") pod \"375c2b15-d870-4c02-bb26-f7deac6a4e81\" (UID: \"375c2b15-d870-4c02-bb26-f7deac6a4e81\") " Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.170693 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/375c2b15-d870-4c02-bb26-f7deac6a4e81-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "375c2b15-d870-4c02-bb26-f7deac6a4e81" (UID: "375c2b15-d870-4c02-bb26-f7deac6a4e81"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.170942 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "375c2b15-d870-4c02-bb26-f7deac6a4e81" (UID: "375c2b15-d870-4c02-bb26-f7deac6a4e81"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.171247 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/375c2b15-d870-4c02-bb26-f7deac6a4e81-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.171263 4766 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.175938 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375c2b15-d870-4c02-bb26-f7deac6a4e81-kube-api-access-xxx8c" (OuterVolumeSpecName: "kube-api-access-xxx8c") pod "375c2b15-d870-4c02-bb26-f7deac6a4e81" (UID: "375c2b15-d870-4c02-bb26-f7deac6a4e81"). InnerVolumeSpecName "kube-api-access-xxx8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.180673 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "375c2b15-d870-4c02-bb26-f7deac6a4e81" (UID: "375c2b15-d870-4c02-bb26-f7deac6a4e81"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.190005 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "375c2b15-d870-4c02-bb26-f7deac6a4e81" (UID: "375c2b15-d870-4c02-bb26-f7deac6a4e81"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.192135 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-scripts" (OuterVolumeSpecName: "scripts") pod "375c2b15-d870-4c02-bb26-f7deac6a4e81" (UID: "375c2b15-d870-4c02-bb26-f7deac6a4e81"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.272212 4766 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.272248 4766 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/375c2b15-d870-4c02-bb26-f7deac6a4e81-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.272260 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/375c2b15-d870-4c02-bb26-f7deac6a4e81-scripts\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.272270 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxx8c\" (UniqueName: \"kubernetes.io/projected/375c2b15-d870-4c02-bb26-f7deac6a4e81-kube-api-access-xxx8c\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.393078 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt"] Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.693309 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" event={"ID":"d45f9168-9e25-4f26-9c4b-22fff074e16f","Type":"ContainerStarted","Data":"c13ba9fb8959c404e164fca9c1abd24dffcc80a373ac067275baaba785276cae"} Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.695151 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" event={"ID":"375c2b15-d870-4c02-bb26-f7deac6a4e81","Type":"ContainerDied","Data":"bb85691050eaa3146f4b3162674e004b54d691a98a3244244b970fe8434054e6"} Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.695188 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb85691050eaa3146f4b3162674e004b54d691a98a3244244b970fe8434054e6" Dec 13 04:05:35 crc kubenswrapper[4766]: I1213 04:05:35.695393 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-8hn4p" Dec 13 04:05:36 crc kubenswrapper[4766]: I1213 04:05:36.705830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" event={"ID":"d45f9168-9e25-4f26-9c4b-22fff074e16f","Type":"ContainerStarted","Data":"2075956f35da0b69bf4bf103bf9cc8c445b34b874f2a83892d7b2cfda3f3d886"} Dec 13 04:05:36 crc kubenswrapper[4766]: I1213 04:05:36.706271 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:36 crc kubenswrapper[4766]: I1213 04:05:36.706284 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" event={"ID":"d45f9168-9e25-4f26-9c4b-22fff074e16f","Type":"ContainerStarted","Data":"b58409767d6fce3a38d1917f58566d55d91d6b12a7c6c8d477abfd84d894ae89"} Dec 13 04:05:36 crc kubenswrapper[4766]: I1213 04:05:36.706296 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:36 crc kubenswrapper[4766]: I1213 04:05:36.774464 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" podStartSLOduration=2.7744397579999998 podStartE2EDuration="2.774439758s" podCreationTimestamp="2025-12-13 04:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:05:36.761937356 +0000 UTC m=+1268.271870330" watchObservedRunningTime="2025-12-13 04:05:36.774439758 +0000 UTC m=+1268.284372722" Dec 13 04:05:37 crc kubenswrapper[4766]: I1213 04:05:37.753179 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"015c074ef7f89fac7b3209bdcb9c028c51b9906ab983704f71ebaa8106cf4a3b"} Dec 13 04:05:37 crc kubenswrapper[4766]: I1213 04:05:37.754370 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"bd0c7235ae892aabc66a2f300f2baa495b9f034ac99c632623c45690999e82d1"} Dec 13 04:05:37 crc kubenswrapper[4766]: I1213 04:05:37.754483 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"3bbf53dc5b7fa528bed1047786fd1368905fd104b2b8ab381dc3876aafb1a757"} Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.666243 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f"] Dec 13 04:05:38 crc kubenswrapper[4766]: E1213 04:05:38.666591 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375c2b15-d870-4c02-bb26-f7deac6a4e81" containerName="swift-ring-rebalance" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.666608 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="375c2b15-d870-4c02-bb26-f7deac6a4e81" containerName="swift-ring-rebalance" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.666741 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="375c2b15-d870-4c02-bb26-f7deac6a4e81" containerName="swift-ring-rebalance" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.668004 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.671642 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vg794" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.698875 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f"] Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.762618 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"a06b23697d144b0171a3294088dd2f1468490f8fbe1ab086cfb862c8edb3f3bd"} Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.853992 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-util\") pod \"ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.854078 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhnn6\" (UniqueName: \"kubernetes.io/projected/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-kube-api-access-nhnn6\") pod \"ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.854118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-bundle\") pod \"ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.956902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhnn6\" (UniqueName: \"kubernetes.io/projected/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-kube-api-access-nhnn6\") pod \"ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.956987 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-bundle\") pod \"ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.957079 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-util\") pod \"ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " 
pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.957694 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-bundle\") pod \"ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.957836 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-util\") pod \"ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:38 crc kubenswrapper[4766]: I1213 04:05:38.994765 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhnn6\" (UniqueName: \"kubernetes.io/projected/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-kube-api-access-nhnn6\") pod \"ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:39 crc kubenswrapper[4766]: I1213 04:05:39.291822 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:39 crc kubenswrapper[4766]: I1213 04:05:39.733700 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:05:39 crc kubenswrapper[4766]: I1213 04:05:39.734178 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:05:39 crc kubenswrapper[4766]: I1213 04:05:39.734232 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 04:05:39 crc kubenswrapper[4766]: I1213 04:05:39.735250 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc36ef313dc384ef79916b9e90a1008e5893b470ee70f62a783fefe019c14e1a"} pod="openshift-machine-config-operator/machine-config-daemon-94w9l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 04:05:39 crc kubenswrapper[4766]: I1213 04:05:39.735308 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" containerID="cri-o://dc36ef313dc384ef79916b9e90a1008e5893b470ee70f62a783fefe019c14e1a" gracePeriod=600 Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.033156 4766 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f"] Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.792080 4766 generic.go:334] "Generic (PLEG): container finished" podID="5d9c93c6-fb77-4340-a91e-8c70448ddbf8" containerID="6bafe3798a2fb31e7a9d37c0eb6d0234c445957ec97277af2ebf176cf626bce8" exitCode=0 Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.792257 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" event={"ID":"5d9c93c6-fb77-4340-a91e-8c70448ddbf8","Type":"ContainerDied","Data":"6bafe3798a2fb31e7a9d37c0eb6d0234c445957ec97277af2ebf176cf626bce8"} Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.792480 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" event={"ID":"5d9c93c6-fb77-4340-a91e-8c70448ddbf8","Type":"ContainerStarted","Data":"6cdfa451857b8718a46ffdb87847b0497e31591f6fda7328ed2a5b8c9b6f79d2"} Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.797918 4766 generic.go:334] "Generic (PLEG): container finished" podID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerID="dc36ef313dc384ef79916b9e90a1008e5893b470ee70f62a783fefe019c14e1a" exitCode=0 Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.797998 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerDied","Data":"dc36ef313dc384ef79916b9e90a1008e5893b470ee70f62a783fefe019c14e1a"} Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.798032 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"1b1133c0e727136c79aaa5f9da8214a41f9cc9ae7bd7f759a1497d0dcecad697"} Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.798070 4766 scope.go:117] "RemoveContainer" containerID="6f383501fdc030c21904f55bdc8f043a9f6b6848b9b670e0fc026beaf3079e7c" Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.804679 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"a59522cd412d787d990a5f94ae1e611a3093fd9f1d78ab6770590e181e272699"} Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.804735 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"c3d8ae2650b0f2af98857fbb5cfc973776be57ef5fbb743e0f88be6ef51469a3"} Dec 13 04:05:40 crc kubenswrapper[4766]: I1213 04:05:40.804749 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"44455cf0721d88fdf5718eddd6508cb5365ef2d22b5fd03d0c1128cc65b490bf"} Dec 13 04:05:41 crc kubenswrapper[4766]: I1213 04:05:41.848062 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"c7b54c5c805bf1b3af896099dbdc1189e4fd4b4c98eb9d6547f88f75e46d0246"} Dec 13 04:05:41 crc kubenswrapper[4766]: I1213 04:05:41.854079 4766 generic.go:334] "Generic (PLEG): 
container finished" podID="5d9c93c6-fb77-4340-a91e-8c70448ddbf8" containerID="8cdf96f15ba373a8cb2988ac9b58718ccd28bcb23b3125e1bc1ece9ac942316c" exitCode=0 Dec 13 04:05:41 crc kubenswrapper[4766]: I1213 04:05:41.854169 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" event={"ID":"5d9c93c6-fb77-4340-a91e-8c70448ddbf8","Type":"ContainerDied","Data":"8cdf96f15ba373a8cb2988ac9b58718ccd28bcb23b3125e1bc1ece9ac942316c"} Dec 13 04:05:42 crc kubenswrapper[4766]: I1213 04:05:42.881034 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"56d69a0a0450a09e5953143276be5d8538997798db4b441ff3454a967441a10b"} Dec 13 04:05:42 crc kubenswrapper[4766]: I1213 04:05:42.881785 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"b12321b99c3c7424eab18aa11685162a502a5f06aeadc374d5489401dc0c8241"} Dec 13 04:05:42 crc kubenswrapper[4766]: I1213 04:05:42.881807 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"71e43ae6f88cdeb32218994ca7555994fd563b4d946d8443d3efb70492443e71"} Dec 13 04:05:42 crc kubenswrapper[4766]: I1213 04:05:42.884692 4766 generic.go:334] "Generic (PLEG): container finished" podID="5d9c93c6-fb77-4340-a91e-8c70448ddbf8" containerID="4a24c1f06196277029f73e043923d13b6cf59fc0a17e14b153dedd31bd52f217" exitCode=0 Dec 13 04:05:42 crc kubenswrapper[4766]: I1213 04:05:42.884749 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" event={"ID":"5d9c93c6-fb77-4340-a91e-8c70448ddbf8","Type":"ContainerDied","Data":"4a24c1f06196277029f73e043923d13b6cf59fc0a17e14b153dedd31bd52f217"} Dec 13 04:05:43 crc kubenswrapper[4766]: I1213 04:05:43.900757 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"b98e78523d279a29f375fd3442dfdbcb1df177c2bd19dca9148928e81beb7d6c"} Dec 13 04:05:43 crc kubenswrapper[4766]: I1213 04:05:43.901136 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"47fb0a4d7b90fdff01b2eabc68381f4760a0db0bf9f680feaad818700d59cc28"} Dec 13 04:05:43 crc kubenswrapper[4766]: I1213 04:05:43.901149 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"df40926932bc7b08ff0eaffae660ac701d7b1cf3c660c20a199f1a6ef658f334"} Dec 13 04:05:43 crc kubenswrapper[4766]: I1213 04:05:43.901158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"9d937b76-c14b-462b-8de2-fecf78a9d3cf","Type":"ContainerStarted","Data":"7c319503b60560f19be6cd06723e433d04af2e763f26ed0754d6c8ca1e7809c0"} Dec 13 04:05:43 crc kubenswrapper[4766]: I1213 04:05:43.944935 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-storage-0" podStartSLOduration=20.006216949 podStartE2EDuration="27.944890668s" 
podCreationTimestamp="2025-12-13 04:05:16 +0000 UTC" firstStartedPulling="2025-12-13 04:05:34.407004251 +0000 UTC m=+1265.916937215" lastFinishedPulling="2025-12-13 04:05:42.34567797 +0000 UTC m=+1273.855610934" observedRunningTime="2025-12-13 04:05:43.941781848 +0000 UTC m=+1275.451714812" watchObservedRunningTime="2025-12-13 04:05:43.944890668 +0000 UTC m=+1275.454823632" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.284735 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.428883 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-bundle\") pod \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.428982 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-util\") pod \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.429013 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhnn6\" (UniqueName: \"kubernetes.io/projected/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-kube-api-access-nhnn6\") pod \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\" (UID: \"5d9c93c6-fb77-4340-a91e-8c70448ddbf8\") " Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.437080 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-bundle" (OuterVolumeSpecName: "bundle") pod "5d9c93c6-fb77-4340-a91e-8c70448ddbf8" (UID: "5d9c93c6-fb77-4340-a91e-8c70448ddbf8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.440137 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-kube-api-access-nhnn6" (OuterVolumeSpecName: "kube-api-access-nhnn6") pod "5d9c93c6-fb77-4340-a91e-8c70448ddbf8" (UID: "5d9c93c6-fb77-4340-a91e-8c70448ddbf8"). InnerVolumeSpecName "kube-api-access-nhnn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.454051 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-util" (OuterVolumeSpecName: "util") pod "5d9c93c6-fb77-4340-a91e-8c70448ddbf8" (UID: "5d9c93c6-fb77-4340-a91e-8c70448ddbf8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.532906 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.532943 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-util\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.532957 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhnn6\" (UniqueName: \"kubernetes.io/projected/5d9c93c6-fb77-4340-a91e-8c70448ddbf8-kube-api-access-nhnn6\") on node \"crc\" DevicePath \"\"" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.911276 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.911282 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f" event={"ID":"5d9c93c6-fb77-4340-a91e-8c70448ddbf8","Type":"ContainerDied","Data":"6cdfa451857b8718a46ffdb87847b0497e31591f6fda7328ed2a5b8c9b6f79d2"} Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.911395 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cdfa451857b8718a46ffdb87847b0497e31591f6fda7328ed2a5b8c9b6f79d2" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.966548 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:44 crc kubenswrapper[4766]: I1213 04:05:44.966933 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-8cfd9857-9gfdt" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.491793 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl"] Dec 13 04:05:54 crc kubenswrapper[4766]: E1213 04:05:54.494028 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d9c93c6-fb77-4340-a91e-8c70448ddbf8" containerName="extract" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.494128 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d9c93c6-fb77-4340-a91e-8c70448ddbf8" containerName="extract" Dec 13 04:05:54 crc kubenswrapper[4766]: E1213 04:05:54.494199 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d9c93c6-fb77-4340-a91e-8c70448ddbf8" containerName="util" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.494265 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d9c93c6-fb77-4340-a91e-8c70448ddbf8" containerName="util" Dec 13 04:05:54 crc kubenswrapper[4766]: E1213 04:05:54.494333 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d9c93c6-fb77-4340-a91e-8c70448ddbf8" containerName="pull" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.494400 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d9c93c6-fb77-4340-a91e-8c70448ddbf8" containerName="pull" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.494705 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d9c93c6-fb77-4340-a91e-8c70448ddbf8" containerName="extract" Dec 13 
04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.495911 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.500728 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-service-cert" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.502560 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-t2rt6" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.607577 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35a443bc-4590-440c-976b-4fce0f7a4467-webhook-cert\") pod \"glance-operator-controller-manager-647f8b4768-2nnpl\" (UID: \"35a443bc-4590-440c-976b-4fce0f7a4467\") " pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.607691 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4qtc\" (UniqueName: \"kubernetes.io/projected/35a443bc-4590-440c-976b-4fce0f7a4467-kube-api-access-m4qtc\") pod \"glance-operator-controller-manager-647f8b4768-2nnpl\" (UID: \"35a443bc-4590-440c-976b-4fce0f7a4467\") " pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.607738 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35a443bc-4590-440c-976b-4fce0f7a4467-apiservice-cert\") pod \"glance-operator-controller-manager-647f8b4768-2nnpl\" (UID: \"35a443bc-4590-440c-976b-4fce0f7a4467\") " pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.617248 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl"] Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.709115 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4qtc\" (UniqueName: \"kubernetes.io/projected/35a443bc-4590-440c-976b-4fce0f7a4467-kube-api-access-m4qtc\") pod \"glance-operator-controller-manager-647f8b4768-2nnpl\" (UID: \"35a443bc-4590-440c-976b-4fce0f7a4467\") " pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.709191 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35a443bc-4590-440c-976b-4fce0f7a4467-apiservice-cert\") pod \"glance-operator-controller-manager-647f8b4768-2nnpl\" (UID: \"35a443bc-4590-440c-976b-4fce0f7a4467\") " pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.709238 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35a443bc-4590-440c-976b-4fce0f7a4467-webhook-cert\") pod \"glance-operator-controller-manager-647f8b4768-2nnpl\" (UID: \"35a443bc-4590-440c-976b-4fce0f7a4467\") " pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" 
Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.719337 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35a443bc-4590-440c-976b-4fce0f7a4467-apiservice-cert\") pod \"glance-operator-controller-manager-647f8b4768-2nnpl\" (UID: \"35a443bc-4590-440c-976b-4fce0f7a4467\") " pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl"
Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.733977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35a443bc-4590-440c-976b-4fce0f7a4467-webhook-cert\") pod \"glance-operator-controller-manager-647f8b4768-2nnpl\" (UID: \"35a443bc-4590-440c-976b-4fce0f7a4467\") " pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl"
Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.746925 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4qtc\" (UniqueName: \"kubernetes.io/projected/35a443bc-4590-440c-976b-4fce0f7a4467-kube-api-access-m4qtc\") pod \"glance-operator-controller-manager-647f8b4768-2nnpl\" (UID: \"35a443bc-4590-440c-976b-4fce0f7a4467\") " pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl"
Dec 13 04:05:54 crc kubenswrapper[4766]: I1213 04:05:54.819093 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl"
Dec 13 04:05:55 crc kubenswrapper[4766]: I1213 04:05:55.218452 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl"]
Dec 13 04:05:55 crc kubenswrapper[4766]: W1213 04:05:55.225794 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a443bc_4590_440c_976b_4fce0f7a4467.slice/crio-9f002aeb5e35b09c625e7dbc4de289019207ae2b5d78e4906bdf92cd64438a15 WatchSource:0}: Error finding container 9f002aeb5e35b09c625e7dbc4de289019207ae2b5d78e4906bdf92cd64438a15: Status 404 returned error can't find the container with id 9f002aeb5e35b09c625e7dbc4de289019207ae2b5d78e4906bdf92cd64438a15
Dec 13 04:05:56 crc kubenswrapper[4766]: I1213 04:05:56.008124 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" event={"ID":"35a443bc-4590-440c-976b-4fce0f7a4467","Type":"ContainerStarted","Data":"9f002aeb5e35b09c625e7dbc4de289019207ae2b5d78e4906bdf92cd64438a15"}
Dec 13 04:05:57 crc kubenswrapper[4766]: I1213 04:05:57.016175 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" event={"ID":"35a443bc-4590-440c-976b-4fce0f7a4467","Type":"ContainerStarted","Data":"4759996fea0e3af93155e1d05590cc3758a9ed0be55b32602ce6211c9c86ebc0"}
Dec 13 04:05:58 crc kubenswrapper[4766]: I1213 04:05:58.041084 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" event={"ID":"35a443bc-4590-440c-976b-4fce0f7a4467","Type":"ContainerStarted","Data":"13adc72098c250971972e58b8955c57ef9ee50be1e048b65367b1e44fe01493b"}
Dec 13 04:05:58 crc kubenswrapper[4766]: I1213 04:05:58.041482 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl"
Dec 13 04:05:58 crc kubenswrapper[4766]: I1213 04:05:58.073764 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl" podStartSLOduration=2.013156086 podStartE2EDuration="4.073742825s" podCreationTimestamp="2025-12-13 04:05:54 +0000 UTC" firstStartedPulling="2025-12-13 04:05:55.228958505 +0000 UTC m=+1286.738891469" lastFinishedPulling="2025-12-13 04:05:57.289545244 +0000 UTC m=+1288.799478208" observedRunningTime="2025-12-13 04:05:58.067613088 +0000 UTC m=+1289.577546062" watchObservedRunningTime="2025-12-13 04:05:58.073742825 +0000 UTC m=+1289.583675789"
Dec 13 04:06:04 crc kubenswrapper[4766]: I1213 04:06:04.824569 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-647f8b4768-2nnpl"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.586618 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-ztsg8"]
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.588134 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-ztsg8"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.593751 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-ztsg8"]
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.640150 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"]
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.643380 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.648933 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"]
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.649407 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-qhkws"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.649612 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.649637 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.649664 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.661900 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4md5\" (UniqueName: \"kubernetes.io/projected/442b9064-0df6-4211-92fb-fca28828c3a1-kube-api-access-q4md5\") pod \"glance-db-create-ztsg8\" (UID: \"442b9064-0df6-4211-92fb-fca28828c3a1\") " pod="glance-kuttl-tests/glance-db-create-ztsg8"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.764026 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd6ff\" (UniqueName: \"kubernetes.io/projected/a2b528cd-5920-43d4-a9b9-11a396597de6-kube-api-access-dd6ff\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.764134 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.764185 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4md5\" (UniqueName: \"kubernetes.io/projected/442b9064-0df6-4211-92fb-fca28828c3a1-kube-api-access-q4md5\") pod \"glance-db-create-ztsg8\" (UID: \"442b9064-0df6-4211-92fb-fca28828c3a1\") " pod="glance-kuttl-tests/glance-db-create-ztsg8"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.764245 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-scripts\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.764302 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.795752 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4md5\" (UniqueName: \"kubernetes.io/projected/442b9064-0df6-4211-92fb-fca28828c3a1-kube-api-access-q4md5\") pod \"glance-db-create-ztsg8\" (UID: \"442b9064-0df6-4211-92fb-fca28828c3a1\") " pod="glance-kuttl-tests/glance-db-create-ztsg8"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.866087 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-scripts\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.866158 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.866216 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd6ff\" (UniqueName: \"kubernetes.io/projected/a2b528cd-5920-43d4-a9b9-11a396597de6-kube-api-access-dd6ff\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.866265 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.867372 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.867373 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-scripts\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.870875 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.893855 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd6ff\" (UniqueName: \"kubernetes.io/projected/a2b528cd-5920-43d4-a9b9-11a396597de6-kube-api-access-dd6ff\") pod \"openstackclient\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.913909 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-ztsg8"
Dec 13 04:06:08 crc kubenswrapper[4766]: I1213 04:06:08.958972 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient"
Dec 13 04:06:09 crc kubenswrapper[4766]: I1213 04:06:09.512957 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-ztsg8"]
Dec 13 04:06:09 crc kubenswrapper[4766]: I1213 04:06:09.564874 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"]
Dec 13 04:06:09 crc kubenswrapper[4766]: W1213 04:06:09.590037 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2b528cd_5920_43d4_a9b9_11a396597de6.slice/crio-d7d1e6d9f8405f1e890ba68e8f1484573db26ed048b39daf466f267e3ed0fe7a WatchSource:0}: Error finding container d7d1e6d9f8405f1e890ba68e8f1484573db26ed048b39daf466f267e3ed0fe7a: Status 404 returned error can't find the container with id d7d1e6d9f8405f1e890ba68e8f1484573db26ed048b39daf466f267e3ed0fe7a
Dec 13 04:06:10 crc kubenswrapper[4766]: I1213 04:06:10.151159 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"a2b528cd-5920-43d4-a9b9-11a396597de6","Type":"ContainerStarted","Data":"d7d1e6d9f8405f1e890ba68e8f1484573db26ed048b39daf466f267e3ed0fe7a"}
Dec 13 04:06:10 crc kubenswrapper[4766]: I1213 04:06:10.152944 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-ztsg8" event={"ID":"442b9064-0df6-4211-92fb-fca28828c3a1","Type":"ContainerStarted","Data":"bcb7b2fbcb377d03b2dac397ca5adef97677e3922a05cd5cff23fcbf24ee4fe7"}
Dec 13 04:06:11 crc kubenswrapper[4766]: I1213 04:06:11.164845 4766 generic.go:334] "Generic (PLEG): container finished" podID="442b9064-0df6-4211-92fb-fca28828c3a1" containerID="03dfb66dc33a23e7f51e0e3294f3d4ed1e48650ea4e5df802866e1e3c79092f4" exitCode=0
Dec 13 04:06:11 crc kubenswrapper[4766]: I1213 04:06:11.164922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-ztsg8" event={"ID":"442b9064-0df6-4211-92fb-fca28828c3a1","Type":"ContainerDied","Data":"03dfb66dc33a23e7f51e0e3294f3d4ed1e48650ea4e5df802866e1e3c79092f4"}
Dec 13 04:06:16 crc kubenswrapper[4766]: I1213 04:06:16.034605 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-ztsg8"
Dec 13 04:06:16 crc kubenswrapper[4766]: I1213 04:06:16.100374 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4md5\" (UniqueName: \"kubernetes.io/projected/442b9064-0df6-4211-92fb-fca28828c3a1-kube-api-access-q4md5\") pod \"442b9064-0df6-4211-92fb-fca28828c3a1\" (UID: \"442b9064-0df6-4211-92fb-fca28828c3a1\") "
Dec 13 04:06:16 crc kubenswrapper[4766]: I1213 04:06:16.108791 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442b9064-0df6-4211-92fb-fca28828c3a1-kube-api-access-q4md5" (OuterVolumeSpecName: "kube-api-access-q4md5") pod "442b9064-0df6-4211-92fb-fca28828c3a1" (UID: "442b9064-0df6-4211-92fb-fca28828c3a1"). InnerVolumeSpecName "kube-api-access-q4md5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:06:16 crc kubenswrapper[4766]: I1213 04:06:16.205001 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4md5\" (UniqueName: \"kubernetes.io/projected/442b9064-0df6-4211-92fb-fca28828c3a1-kube-api-access-q4md5\") on node \"crc\" DevicePath \"\""
Dec 13 04:06:16 crc kubenswrapper[4766]: I1213 04:06:16.212141 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-ztsg8" event={"ID":"442b9064-0df6-4211-92fb-fca28828c3a1","Type":"ContainerDied","Data":"bcb7b2fbcb377d03b2dac397ca5adef97677e3922a05cd5cff23fcbf24ee4fe7"}
Dec 13 04:06:16 crc kubenswrapper[4766]: I1213 04:06:16.212189 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-ztsg8"
Dec 13 04:06:16 crc kubenswrapper[4766]: I1213 04:06:16.212183 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcb7b2fbcb377d03b2dac397ca5adef97677e3922a05cd5cff23fcbf24ee4fe7"
Dec 13 04:06:22 crc kubenswrapper[4766]: I1213 04:06:22.267837 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"a2b528cd-5920-43d4-a9b9-11a396597de6","Type":"ContainerStarted","Data":"20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b"}
Dec 13 04:06:22 crc kubenswrapper[4766]: I1213 04:06:22.292635 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=1.857447403 podStartE2EDuration="14.292611233s" podCreationTimestamp="2025-12-13 04:06:08 +0000 UTC" firstStartedPulling="2025-12-13 04:06:09.594386104 +0000 UTC m=+1301.104319068" lastFinishedPulling="2025-12-13 04:06:22.029549934 +0000 UTC m=+1313.539482898" observedRunningTime="2025-12-13 04:06:22.284359044 +0000 UTC m=+1313.794292008" watchObservedRunningTime="2025-12-13 04:06:22.292611233 +0000 UTC m=+1313.802544197"
Dec 13 04:06:28 crc kubenswrapper[4766]: I1213 04:06:28.620501 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-a79a-account-create-ndmtj"]
Dec 13 04:06:28 crc kubenswrapper[4766]: E1213 04:06:28.622170 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442b9064-0df6-4211-92fb-fca28828c3a1" containerName="mariadb-database-create"
Dec 13 04:06:28 crc kubenswrapper[4766]: I1213 04:06:28.622220 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="442b9064-0df6-4211-92fb-fca28828c3a1" containerName="mariadb-database-create"
Dec 13 04:06:28 crc kubenswrapper[4766]: I1213 04:06:28.622557 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="442b9064-0df6-4211-92fb-fca28828c3a1" containerName="mariadb-database-create"
Dec 13 04:06:28 crc kubenswrapper[4766]: I1213 04:06:28.623324 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-a79a-account-create-ndmtj"
Dec 13 04:06:28 crc kubenswrapper[4766]: I1213 04:06:28.629874 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret"
Dec 13 04:06:28 crc kubenswrapper[4766]: I1213 04:06:28.642939 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-a79a-account-create-ndmtj"]
Dec 13 04:06:28 crc kubenswrapper[4766]: I1213 04:06:28.812220 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs24c\" (UniqueName: \"kubernetes.io/projected/862dcb98-e52e-4e7e-a77d-ecf08b87fee1-kube-api-access-bs24c\") pod \"glance-a79a-account-create-ndmtj\" (UID: \"862dcb98-e52e-4e7e-a77d-ecf08b87fee1\") " pod="glance-kuttl-tests/glance-a79a-account-create-ndmtj"
Dec 13 04:06:28 crc kubenswrapper[4766]: I1213 04:06:28.914033 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs24c\" (UniqueName: \"kubernetes.io/projected/862dcb98-e52e-4e7e-a77d-ecf08b87fee1-kube-api-access-bs24c\") pod \"glance-a79a-account-create-ndmtj\" (UID: \"862dcb98-e52e-4e7e-a77d-ecf08b87fee1\") " pod="glance-kuttl-tests/glance-a79a-account-create-ndmtj"
Dec 13 04:06:28 crc kubenswrapper[4766]: I1213 04:06:28.940501 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs24c\" (UniqueName: \"kubernetes.io/projected/862dcb98-e52e-4e7e-a77d-ecf08b87fee1-kube-api-access-bs24c\") pod \"glance-a79a-account-create-ndmtj\" (UID: \"862dcb98-e52e-4e7e-a77d-ecf08b87fee1\") " pod="glance-kuttl-tests/glance-a79a-account-create-ndmtj"
Dec 13 04:06:28 crc kubenswrapper[4766]: I1213 04:06:28.950984 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-a79a-account-create-ndmtj"
Dec 13 04:06:29 crc kubenswrapper[4766]: I1213 04:06:29.385110 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-a79a-account-create-ndmtj"]
Dec 13 04:06:30 crc kubenswrapper[4766]: I1213 04:06:30.330280 4766 generic.go:334] "Generic (PLEG): container finished" podID="862dcb98-e52e-4e7e-a77d-ecf08b87fee1" containerID="5c16391395fcd155be5af5097f62126590fedb412fbb77572304d984b0fe5ec2" exitCode=0
Dec 13 04:06:30 crc kubenswrapper[4766]: I1213 04:06:30.330327 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-a79a-account-create-ndmtj" event={"ID":"862dcb98-e52e-4e7e-a77d-ecf08b87fee1","Type":"ContainerDied","Data":"5c16391395fcd155be5af5097f62126590fedb412fbb77572304d984b0fe5ec2"}
Dec 13 04:06:30 crc kubenswrapper[4766]: I1213 04:06:30.330354 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-a79a-account-create-ndmtj" event={"ID":"862dcb98-e52e-4e7e-a77d-ecf08b87fee1","Type":"ContainerStarted","Data":"d6876e79bb5a5627020cbba0b4836a92e1c68d3f993f8296e2cda411954aa45a"}
Dec 13 04:06:31 crc kubenswrapper[4766]: I1213 04:06:31.573421 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-a79a-account-create-ndmtj"
Dec 13 04:06:31 crc kubenswrapper[4766]: I1213 04:06:31.762058 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs24c\" (UniqueName: \"kubernetes.io/projected/862dcb98-e52e-4e7e-a77d-ecf08b87fee1-kube-api-access-bs24c\") pod \"862dcb98-e52e-4e7e-a77d-ecf08b87fee1\" (UID: \"862dcb98-e52e-4e7e-a77d-ecf08b87fee1\") "
Dec 13 04:06:31 crc kubenswrapper[4766]: I1213 04:06:31.768777 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862dcb98-e52e-4e7e-a77d-ecf08b87fee1-kube-api-access-bs24c" (OuterVolumeSpecName: "kube-api-access-bs24c") pod "862dcb98-e52e-4e7e-a77d-ecf08b87fee1" (UID: "862dcb98-e52e-4e7e-a77d-ecf08b87fee1"). InnerVolumeSpecName "kube-api-access-bs24c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:06:31 crc kubenswrapper[4766]: I1213 04:06:31.864349 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs24c\" (UniqueName: \"kubernetes.io/projected/862dcb98-e52e-4e7e-a77d-ecf08b87fee1-kube-api-access-bs24c\") on node \"crc\" DevicePath \"\""
Dec 13 04:06:32 crc kubenswrapper[4766]: I1213 04:06:32.346479 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-a79a-account-create-ndmtj" event={"ID":"862dcb98-e52e-4e7e-a77d-ecf08b87fee1","Type":"ContainerDied","Data":"d6876e79bb5a5627020cbba0b4836a92e1c68d3f993f8296e2cda411954aa45a"}
Dec 13 04:06:32 crc kubenswrapper[4766]: I1213 04:06:32.346885 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6876e79bb5a5627020cbba0b4836a92e1c68d3f993f8296e2cda411954aa45a"
Dec 13 04:06:32 crc kubenswrapper[4766]: I1213 04:06:32.346943 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-a79a-account-create-ndmtj"
Dec 13 04:06:33 crc kubenswrapper[4766]: I1213 04:06:33.880199 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-fchgr"]
Dec 13 04:06:33 crc kubenswrapper[4766]: E1213 04:06:33.880701 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862dcb98-e52e-4e7e-a77d-ecf08b87fee1" containerName="mariadb-account-create"
Dec 13 04:06:33 crc kubenswrapper[4766]: I1213 04:06:33.880720 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="862dcb98-e52e-4e7e-a77d-ecf08b87fee1" containerName="mariadb-account-create"
Dec 13 04:06:33 crc kubenswrapper[4766]: I1213 04:06:33.881029 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="862dcb98-e52e-4e7e-a77d-ecf08b87fee1" containerName="mariadb-account-create"
Dec 13 04:06:33 crc kubenswrapper[4766]: I1213 04:06:33.881784 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:33 crc kubenswrapper[4766]: I1213 04:06:33.884956 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data"
Dec 13 04:06:33 crc kubenswrapper[4766]: I1213 04:06:33.887240 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-fchgr"]
Dec 13 04:06:33 crc kubenswrapper[4766]: I1213 04:06:33.888959 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-pw9v6"
Dec 13 04:06:33 crc kubenswrapper[4766]: I1213 04:06:33.995629 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-db-sync-config-data\") pod \"glance-db-sync-fchgr\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") " pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:33 crc kubenswrapper[4766]: I1213 04:06:33.995774 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qcf4\" (UniqueName: \"kubernetes.io/projected/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-kube-api-access-8qcf4\") pod \"glance-db-sync-fchgr\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") " pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:33 crc kubenswrapper[4766]: I1213 04:06:33.995883 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-config-data\") pod \"glance-db-sync-fchgr\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") " pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:34 crc kubenswrapper[4766]: I1213 04:06:34.097070 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-config-data\") pod \"glance-db-sync-fchgr\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") " pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:34 crc kubenswrapper[4766]: I1213 04:06:34.097192 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-db-sync-config-data\") pod \"glance-db-sync-fchgr\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") " pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:34 crc kubenswrapper[4766]: I1213 04:06:34.097235 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qcf4\" (UniqueName: \"kubernetes.io/projected/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-kube-api-access-8qcf4\") pod \"glance-db-sync-fchgr\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") " pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:34 crc kubenswrapper[4766]: I1213 04:06:34.102709 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-config-data\") pod \"glance-db-sync-fchgr\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") " pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:34 crc kubenswrapper[4766]: I1213 04:06:34.103030 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-db-sync-config-data\") pod \"glance-db-sync-fchgr\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") " pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:34 crc kubenswrapper[4766]: I1213 04:06:34.121770 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qcf4\" (UniqueName: \"kubernetes.io/projected/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-kube-api-access-8qcf4\") pod \"glance-db-sync-fchgr\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") " pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:34 crc kubenswrapper[4766]: I1213 04:06:34.204694 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:06:34 crc kubenswrapper[4766]: I1213 04:06:34.707069 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-fchgr"]
Dec 13 04:06:35 crc kubenswrapper[4766]: I1213 04:06:35.375889 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-fchgr" event={"ID":"02ccd4a3-96bb-4def-b7a3-78eb39b898c1","Type":"ContainerStarted","Data":"1c2c68adb36090fd026c0d8ad4f099cc123312565945403f8b82596ca3afc840"}
Dec 13 04:07:02 crc kubenswrapper[4766]: E1213 04:07:02.062179 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
Dec 13 04:07:02 crc kubenswrapper[4766]: E1213 04:07:02.063757 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qcf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-fchgr_glance-kuttl-tests(02ccd4a3-96bb-4def-b7a3-78eb39b898c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 13 04:07:02 crc kubenswrapper[4766]: E1213 04:07:02.065015 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="glance-kuttl-tests/glance-db-sync-fchgr" podUID="02ccd4a3-96bb-4def-b7a3-78eb39b898c1"
Dec 13 04:07:02 crc kubenswrapper[4766]: E1213 04:07:02.621681 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="glance-kuttl-tests/glance-db-sync-fchgr" podUID="02ccd4a3-96bb-4def-b7a3-78eb39b898c1"
Dec 13 04:07:16 crc kubenswrapper[4766]: I1213 04:07:16.748581 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-fchgr" event={"ID":"02ccd4a3-96bb-4def-b7a3-78eb39b898c1","Type":"ContainerStarted","Data":"76d0b99a05fcbeeaad61deae24483bf624e828053d6571f93f2901ef95d74180"}
Dec 13 04:07:16 crc kubenswrapper[4766]: I1213 04:07:16.788042 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-fchgr" podStartSLOduration=3.408054882 podStartE2EDuration="43.788012046s" podCreationTimestamp="2025-12-13 04:06:33 +0000 UTC" firstStartedPulling="2025-12-13 04:06:34.727074114 +0000 UTC m=+1326.237007078" lastFinishedPulling="2025-12-13 04:07:15.107031268 +0000 UTC m=+1366.616964242" observedRunningTime="2025-12-13 04:07:16.772116962 +0000 UTC m=+1368.282049946" watchObservedRunningTime="2025-12-13 04:07:16.788012046 +0000 UTC m=+1368.297945010"
Dec 13 04:07:24 crc kubenswrapper[4766]: I1213 04:07:24.820066 4766 generic.go:334] "Generic (PLEG): container finished" podID="02ccd4a3-96bb-4def-b7a3-78eb39b898c1" containerID="76d0b99a05fcbeeaad61deae24483bf624e828053d6571f93f2901ef95d74180" exitCode=0
Dec 13 04:07:24 crc kubenswrapper[4766]: I1213 04:07:24.820140 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-fchgr" event={"ID":"02ccd4a3-96bb-4def-b7a3-78eb39b898c1","Type":"ContainerDied","Data":"76d0b99a05fcbeeaad61deae24483bf624e828053d6571f93f2901ef95d74180"}
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.148042 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.207313 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-config-data\") pod \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") "
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.207805 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qcf4\" (UniqueName: \"kubernetes.io/projected/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-kube-api-access-8qcf4\") pod \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") "
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.214547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-kube-api-access-8qcf4" (OuterVolumeSpecName: "kube-api-access-8qcf4") pod "02ccd4a3-96bb-4def-b7a3-78eb39b898c1" (UID: "02ccd4a3-96bb-4def-b7a3-78eb39b898c1"). InnerVolumeSpecName "kube-api-access-8qcf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.254872 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-config-data" (OuterVolumeSpecName: "config-data") pod "02ccd4a3-96bb-4def-b7a3-78eb39b898c1" (UID: "02ccd4a3-96bb-4def-b7a3-78eb39b898c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.309233 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-db-sync-config-data\") pod \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\" (UID: \"02ccd4a3-96bb-4def-b7a3-78eb39b898c1\") "
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.309642 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qcf4\" (UniqueName: \"kubernetes.io/projected/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-kube-api-access-8qcf4\") on node \"crc\" DevicePath \"\""
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.309664 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-config-data\") on node \"crc\" DevicePath \"\""
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.312692 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "02ccd4a3-96bb-4def-b7a3-78eb39b898c1" (UID: "02ccd4a3-96bb-4def-b7a3-78eb39b898c1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.411304 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/02ccd4a3-96bb-4def-b7a3-78eb39b898c1-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.840115 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-fchgr" event={"ID":"02ccd4a3-96bb-4def-b7a3-78eb39b898c1","Type":"ContainerDied","Data":"1c2c68adb36090fd026c0d8ad4f099cc123312565945403f8b82596ca3afc840"}
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.840163 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c2c68adb36090fd026c0d8ad4f099cc123312565945403f8b82596ca3afc840"
Dec 13 04:07:26 crc kubenswrapper[4766]: I1213 04:07:26.840263 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-fchgr"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.260276 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Dec 13 04:07:28 crc kubenswrapper[4766]: E1213 04:07:28.261379 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ccd4a3-96bb-4def-b7a3-78eb39b898c1" containerName="glance-db-sync"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.261408 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ccd4a3-96bb-4def-b7a3-78eb39b898c1" containerName="glance-db-sync"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.261667 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ccd4a3-96bb-4def-b7a3-78eb39b898c1" containerName="glance-db-sync"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.264125 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.269261 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-pw9v6"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.269590 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.270249 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.273282 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.294277 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"]
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.298211 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.333513 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"]
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.443905 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.443997 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-scripts\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.444034 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-logs\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.444073 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg97z\" (UniqueName: \"kubernetes.io/projected/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-kube-api-access-mg97z\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.444462 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzphk\" (UniqueName: \"kubernetes.io/projected/730d7329-bdd7-4c4f-ad6b-b091b5521040-kube-api-access-hzphk\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.444654 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-httpd-run\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.444696 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.444856 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-run\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0"
Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.444919 4766 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-lib-modules\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.444983 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-config-data\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445021 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445059 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-logs\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-config-data\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445156 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-dev\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445195 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-scripts\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445334 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445390 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-run\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445460 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445486 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-sys\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445508 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445551 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-httpd-run\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445572 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-nvme\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445680 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-dev\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445726 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-sys\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445761 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445804 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445878 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-nvme\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.445945 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-lib-modules\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.540526 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Dec 13 04:07:28 crc kubenswrapper[4766]: E1213 04:07:28.541245 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data dev etc-iscsi etc-nvme glance glance-cache httpd-run kube-api-access-hzphk lib-modules logs run scripts sys var-locks-brick], unattached volumes=[], failed to process volumes=[]: context canceled" pod="glance-kuttl-tests/glance-default-single-1" podUID="730d7329-bdd7-4c4f-ad6b-b091b5521040" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547219 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547265 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-httpd-run\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547300 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-run\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547323 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-lib-modules\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547346 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-config-data\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547365 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 
13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547380 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-logs\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547406 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-config-data\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547440 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-dev\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547424 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-run\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547458 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-scripts\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547520 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547696 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-lib-modules\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547764 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-dev\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548013 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") device mount path \"/mnt/openstack/pv07\"" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548042 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548061 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-httpd-run\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548062 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548219 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-run\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548317 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-logs\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.547565 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-run\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548460 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548489 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548522 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-sys\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " 
pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-sys\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548654 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-httpd-run\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548677 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-nvme\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548713 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-dev\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548742 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-sys\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548768 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548801 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548813 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548855 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-nvme\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548861 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-sys\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548906 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-lib-modules\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548926 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-dev\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548941 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548971 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-scripts\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.548994 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-logs\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.549037 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg97z\" (UniqueName: \"kubernetes.io/projected/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-kube-api-access-mg97z\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.549037 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-httpd-run\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.549047 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-nvme\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.549058 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-nvme\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " 
pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.549070 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzphk\" (UniqueName: \"kubernetes.io/projected/730d7329-bdd7-4c4f-ad6b-b091b5521040-kube-api-access-hzphk\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.549099 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.549103 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.549167 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-lib-modules\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.549183 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.549464 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-logs\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.554724 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-scripts\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.556807 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-scripts\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.561265 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-config-data\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.561555 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-config-data\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.571985 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.580163 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.580343 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.580538 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzphk\" (UniqueName: \"kubernetes.io/projected/730d7329-bdd7-4c4f-ad6b-b091b5521040-kube-api-access-hzphk\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.585007 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.589791 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg97z\" (UniqueName: \"kubernetes.io/projected/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-kube-api-access-mg97z\") pod \"glance-default-single-0\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.854046 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.865638 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.884350 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.955849 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-run\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956392 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-httpd-run\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956460 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-config-data\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956486 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-scripts\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956566 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-nvme\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956636 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956689 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-logs\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956734 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-dev\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956761 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-lib-modules\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956792 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzphk\" (UniqueName: \"kubernetes.io/projected/730d7329-bdd7-4c4f-ad6b-b091b5521040-kube-api-access-hzphk\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956823 4766 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-var-locks-brick\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956858 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-sys\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956878 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.956923 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-iscsi\") pod \"730d7329-bdd7-4c4f-ad6b-b091b5521040\" (UID: \"730d7329-bdd7-4c4f-ad6b-b091b5521040\") " Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.957342 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.957391 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-run" (OuterVolumeSpecName: "run") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.957681 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.958451 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-dev" (OuterVolumeSpecName: "dev") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.958515 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.958593 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.958972 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.959018 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-sys" (OuterVolumeSpecName: "sys") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.959247 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-logs" (OuterVolumeSpecName: "logs") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.963035 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance-cache") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.964046 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-scripts" (OuterVolumeSpecName: "scripts") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.964851 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730d7329-bdd7-4c4f-ad6b-b091b5521040-kube-api-access-hzphk" (OuterVolumeSpecName: "kube-api-access-hzphk") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "kube-api-access-hzphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.965773 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 13 04:07:28 crc kubenswrapper[4766]: I1213 04:07:28.966670 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-config-data" (OuterVolumeSpecName: "config-data") pod "730d7329-bdd7-4c4f-ad6b-b091b5521040" (UID: "730d7329-bdd7-4c4f-ad6b-b091b5521040"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.058844 4766 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-dev\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.058886 4766 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-lib-modules\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.058900 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzphk\" (UniqueName: \"kubernetes.io/projected/730d7329-bdd7-4c4f-ad6b-b091b5521040-kube-api-access-hzphk\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.058909 4766 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-var-locks-brick\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.058919 4766 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-sys\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.058955 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.058967 4766 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-iscsi\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.058977 4766 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-run\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.058988 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.058996 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-config-data\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.059006 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730d7329-bdd7-4c4f-ad6b-b091b5521040-scripts\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.059014 4766 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/730d7329-bdd7-4c4f-ad6b-b091b5521040-etc-nvme\") on node 
\"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.059032 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.059040 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/730d7329-bdd7-4c4f-ad6b-b091b5521040-logs\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.076509 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.078909 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.340294 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.340361 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.391074 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.872001 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"360b0d38-e0ef-4cc8-ae58-cb842d5d5264","Type":"ContainerStarted","Data":"3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4"} Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.872834 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"360b0d38-e0ef-4cc8-ae58-cb842d5d5264","Type":"ContainerStarted","Data":"060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6"} Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.872059 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:29 crc kubenswrapper[4766]: I1213 04:07:29.872861 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"360b0d38-e0ef-4cc8-ae58-cb842d5d5264","Type":"ContainerStarted","Data":"4dea7b99574f16be37d439d59698d4494a0b56ad91645bcac0eb16399a71f393"} Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.297290 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=3.297266713 podStartE2EDuration="3.297266713s" podCreationTimestamp="2025-12-13 04:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:07:30.258067608 +0000 UTC m=+1381.768000572" watchObservedRunningTime="2025-12-13 04:07:30.297266713 +0000 UTC m=+1381.807199677" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.320137 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.343639 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.356009 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.358363 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.365901 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.438573 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.438683 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.438750 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvgk2\" (UniqueName: \"kubernetes.io/projected/dc903065-1311-4d54-940e-5792987bfed5-kube-api-access-hvgk2\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.438805 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-run\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.438837 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-logs\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.438853 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-scripts\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.439095 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.439261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-nvme\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.439374 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-httpd-run\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.439467 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-config-data\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.439489 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-lib-modules\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.439515 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-sys\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.439680 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.439723 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-dev\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.685351 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-run\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.685821 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-logs\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.685849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-scripts\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.685891 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.685928 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-nvme\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.685967 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-httpd-run\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.685995 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-config-data\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.686015 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-lib-modules\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.686036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-sys\") pod \"glance-default-single-1\" (UID: 
\"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.686071 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.686089 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-dev\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.686113 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.686134 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.686164 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvgk2\" (UniqueName: \"kubernetes.io/projected/dc903065-1311-4d54-940e-5792987bfed5-kube-api-access-hvgk2\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.686642 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-run\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.687267 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-logs\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.688889 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-lib-modules\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.689124 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.692200 
4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-httpd-run\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.692282 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-nvme\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.699560 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-sys\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.699638 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.699663 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-dev\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.699810 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.701040 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-config-data\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.701461 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.701925 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-scripts\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.740344 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvgk2\" (UniqueName: \"kubernetes.io/projected/dc903065-1311-4d54-940e-5792987bfed5-kube-api-access-hvgk2\") pod 
\"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.751962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.758011 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-single-1\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:30 crc kubenswrapper[4766]: I1213 04:07:30.989692 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:31 crc kubenswrapper[4766]: I1213 04:07:31.474687 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Dec 13 04:07:31 crc kubenswrapper[4766]: W1213 04:07:31.480489 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc903065_1311_4d54_940e_5792987bfed5.slice/crio-7e9944d4bd65a87d849782d871788430369ee463975417b2cf1cb16e4981f4cc WatchSource:0}: Error finding container 7e9944d4bd65a87d849782d871788430369ee463975417b2cf1cb16e4981f4cc: Status 404 returned error can't find the container with id 7e9944d4bd65a87d849782d871788430369ee463975417b2cf1cb16e4981f4cc Dec 13 04:07:31 crc kubenswrapper[4766]: I1213 04:07:31.630327 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730d7329-bdd7-4c4f-ad6b-b091b5521040" path="/var/lib/kubelet/pods/730d7329-bdd7-4c4f-ad6b-b091b5521040/volumes" Dec 13 04:07:31 crc kubenswrapper[4766]: I1213 04:07:31.911774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"dc903065-1311-4d54-940e-5792987bfed5","Type":"ContainerStarted","Data":"5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22"} Dec 13 04:07:31 crc kubenswrapper[4766]: I1213 04:07:31.912316 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"dc903065-1311-4d54-940e-5792987bfed5","Type":"ContainerStarted","Data":"7e9944d4bd65a87d849782d871788430369ee463975417b2cf1cb16e4981f4cc"} Dec 13 04:07:32 crc kubenswrapper[4766]: I1213 04:07:32.925830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"dc903065-1311-4d54-940e-5792987bfed5","Type":"ContainerStarted","Data":"7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3"} Dec 13 04:07:32 crc kubenswrapper[4766]: I1213 04:07:32.956514 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=2.956484989 podStartE2EDuration="2.956484989s" podCreationTimestamp="2025-12-13 04:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:07:32.947220529 +0000 UTC m=+1384.457153493" watchObservedRunningTime="2025-12-13 04:07:32.956484989 +0000 UTC m=+1384.466417953" Dec 13 04:07:34 crc 
kubenswrapper[4766]: I1213 04:07:34.519867 4766 scope.go:117] "RemoveContainer" containerID="fe2e59f8f1146c05be5c010e7af427dc596e19708401cb87547728781f4ac317" Dec 13 04:07:34 crc kubenswrapper[4766]: I1213 04:07:34.543332 4766 scope.go:117] "RemoveContainer" containerID="03336d11171b6ccc893eebd5ad46a550320086e8f43c2d460cae0c12f43bd111" Dec 13 04:07:34 crc kubenswrapper[4766]: I1213 04:07:34.575291 4766 scope.go:117] "RemoveContainer" containerID="4fefa8977b36e78768aaef0a6dd432f2cad8ae118bf39240630c7de6d1ab61c5" Dec 13 04:07:38 crc kubenswrapper[4766]: I1213 04:07:38.884766 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:38 crc kubenswrapper[4766]: I1213 04:07:38.885569 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:38 crc kubenswrapper[4766]: I1213 04:07:38.924280 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:38 crc kubenswrapper[4766]: I1213 04:07:38.930700 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:38 crc kubenswrapper[4766]: I1213 04:07:38.993276 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:38 crc kubenswrapper[4766]: I1213 04:07:38.993377 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:40 crc kubenswrapper[4766]: I1213 04:07:40.989964 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:40 crc kubenswrapper[4766]: I1213 04:07:40.990329 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:41 crc kubenswrapper[4766]: I1213 04:07:41.125955 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:41 crc kubenswrapper[4766]: I1213 04:07:41.126488 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:41 crc kubenswrapper[4766]: I1213 04:07:41.138367 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:42 crc kubenswrapper[4766]: I1213 04:07:42.111185 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:42 crc kubenswrapper[4766]: I1213 04:07:42.111675 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 04:07:42 crc kubenswrapper[4766]: I1213 04:07:42.113449 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:42 crc kubenswrapper[4766]: I1213 04:07:42.249411 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:43 crc kubenswrapper[4766]: I1213 04:07:43.122095 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 04:07:43 crc kubenswrapper[4766]: I1213 04:07:43.511556 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:43 crc kubenswrapper[4766]: I1213 04:07:43.514420 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:07:43 crc kubenswrapper[4766]: I1213 04:07:43.584524 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Dec 13 04:07:44 crc kubenswrapper[4766]: I1213 04:07:44.131342 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerName="glance-log" containerID="cri-o://060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6" gracePeriod=30 Dec 13 04:07:44 crc kubenswrapper[4766]: I1213 04:07:44.131655 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerName="glance-httpd" containerID="cri-o://3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4" gracePeriod=30 Dec 13 04:07:44 crc kubenswrapper[4766]: I1213 04:07:44.141816 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.102:9292/healthcheck\": EOF" Dec 13 04:07:44 crc kubenswrapper[4766]: I1213 04:07:44.142764 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.102:9292/healthcheck\": EOF" Dec 13 04:07:45 crc kubenswrapper[4766]: I1213 04:07:45.145263 4766 generic.go:334] "Generic (PLEG): container finished" podID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerID="060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6" exitCode=143 Dec 13 04:07:45 crc kubenswrapper[4766]: I1213 04:07:45.145331 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"360b0d38-e0ef-4cc8-ae58-cb842d5d5264","Type":"ContainerDied","Data":"060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6"} Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.030409 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.188545 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-dev\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.188684 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-nvme\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.188721 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-logs\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.188754 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-dev" (OuterVolumeSpecName: "dev") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.188809 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-httpd-run\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.188839 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.188920 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-lib-modules\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.188941 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.189004 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.189218 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-logs" (OuterVolumeSpecName: "logs") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.189183 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.189379 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190071 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-config-data\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190125 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-run\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190167 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-var-locks-brick\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190199 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg97z\" (UniqueName: \"kubernetes.io/projected/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-kube-api-access-mg97z\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190234 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-iscsi\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190270 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-scripts\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190296 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"sys\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-sys\") pod \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\" (UID: \"360b0d38-e0ef-4cc8-ae58-cb842d5d5264\") " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190862 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190884 4766 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-lib-modules\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190897 4766 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-dev\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190910 4766 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-nvme\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190921 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-logs\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.190961 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-sys" (OuterVolumeSpecName: "sys") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.191026 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.191694 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-run" (OuterVolumeSpecName: "run") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.191755 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.198689 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance-cache") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.202608 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-scripts" (OuterVolumeSpecName: "scripts") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.202650 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-kube-api-access-mg97z" (OuterVolumeSpecName: "kube-api-access-mg97z") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "kube-api-access-mg97z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.202715 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.241641 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-config-data" (OuterVolumeSpecName: "config-data") pod "360b0d38-e0ef-4cc8-ae58-cb842d5d5264" (UID: "360b0d38-e0ef-4cc8-ae58-cb842d5d5264"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.303121 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.303279 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.304472 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-config-data\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.304537 4766 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-run\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.304567 4766 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-var-locks-brick\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.304594 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg97z\" (UniqueName: \"kubernetes.io/projected/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-kube-api-access-mg97z\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.304611 4766 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-etc-iscsi\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.304656 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-scripts\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.304680 4766 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/360b0d38-e0ef-4cc8-ae58-cb842d5d5264-sys\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.354380 4766 generic.go:334] "Generic (PLEG): container finished" podID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerID="3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4" exitCode=0 Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.354447 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"360b0d38-e0ef-4cc8-ae58-cb842d5d5264","Type":"ContainerDied","Data":"3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4"} Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.354478 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"360b0d38-e0ef-4cc8-ae58-cb842d5d5264","Type":"ContainerDied","Data":"4dea7b99574f16be37d439d59698d4494a0b56ad91645bcac0eb16399a71f393"} Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.354514 4766 scope.go:117] "RemoveContainer" containerID="3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.354674 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.365948 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.384558 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.412738 4766 scope.go:117] "RemoveContainer" containerID="060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.413915 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.414629 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.414783 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.434381 4766 scope.go:117] "RemoveContainer" containerID="3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4" Dec 13 04:07:49 crc kubenswrapper[4766]: E1213 04:07:49.435499 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4\": container with ID starting with 3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4 not found: ID does not exist" containerID="3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.435545 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4"} err="failed to get container status \"3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4\": rpc error: code = NotFound desc = could not find container \"3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4\": container with ID starting with 3fbfc348996b10e780e6d8ba9dea184d372b029f8fea276650dcabc3b1a217b4 not found: ID does not exist" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.435570 4766 scope.go:117] "RemoveContainer" containerID="060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6" Dec 13 04:07:49 crc kubenswrapper[4766]: E1213 04:07:49.436016 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6\": container with ID starting with 060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6 not found: ID does not exist" containerID="060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.436062 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6"} err="failed to get container status 
\"060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6\": rpc error: code = NotFound desc = could not find container \"060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6\": container with ID starting with 060603f145ef083878c93c9881adb230ed3521931328784dc6e796e54344ccc6 not found: ID does not exist" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.440556 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.452111 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Dec 13 04:07:49 crc kubenswrapper[4766]: E1213 04:07:49.452657 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerName="glance-log" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.452757 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerName="glance-log" Dec 13 04:07:49 crc kubenswrapper[4766]: E1213 04:07:49.452846 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerName="glance-httpd" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.452858 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerName="glance-httpd" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.453112 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerName="glance-log" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.453149 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" containerName="glance-httpd" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.454338 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.466198 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.617720 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-scripts\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.617802 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-sys\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.617840 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.617867 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-run\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.617940 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-httpd-run\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.617995 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.618022 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.618047 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-lib-modules\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.618078 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dev\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-dev\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.618101 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s97g\" (UniqueName: \"kubernetes.io/projected/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-kube-api-access-4s97g\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.618150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.618192 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-logs\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.618237 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-nvme\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.618259 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-config-data\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.625587 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360b0d38-e0ef-4cc8-ae58-cb842d5d5264" path="/var/lib/kubelet/pods/360b0d38-e0ef-4cc8-ae58-cb842d5d5264/volumes" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720067 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-nvme\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720117 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-config-data\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720155 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-scripts\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " 
pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720173 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-sys\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720191 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-run\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720238 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-httpd-run\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720283 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720320 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-lib-modules\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720342 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-dev\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s97g\" (UniqueName: \"kubernetes.io/projected/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-kube-api-access-4s97g\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720381 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-logs\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.720947 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-logs\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.721003 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-nvme\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.721701 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-dev\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.721861 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.724904 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.724951 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-sys\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.725015 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-run\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.725107 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " 
pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.725124 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-lib-modules\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.725476 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") device mount path \"/mnt/openstack/pv07\"" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.725665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-httpd-run\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.733040 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-config-data\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.733694 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-scripts\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.763755 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s97g\" (UniqueName: \"kubernetes.io/projected/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-kube-api-access-4s97g\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.765948 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:49 crc kubenswrapper[4766]: I1213 04:07:49.781370 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-single-0\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:50 crc kubenswrapper[4766]: I1213 04:07:50.073904 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:07:50 crc kubenswrapper[4766]: I1213 04:07:50.396011 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Dec 13 04:07:50 crc kubenswrapper[4766]: W1213 04:07:50.410328 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf45b4f61_69d9_4cc0_b6e4_f6ddf6e8e7e2.slice/crio-45a981aface99777b0fdc9da5fe60eb745bb07fa6c41bf0f8d5e8f17c7094257 WatchSource:0}: Error finding container 45a981aface99777b0fdc9da5fe60eb745bb07fa6c41bf0f8d5e8f17c7094257: Status 404 returned error can't find the container with id 45a981aface99777b0fdc9da5fe60eb745bb07fa6c41bf0f8d5e8f17c7094257 Dec 13 04:07:51 crc kubenswrapper[4766]: I1213 04:07:51.390654 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2","Type":"ContainerStarted","Data":"9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb"} Dec 13 04:07:51 crc kubenswrapper[4766]: I1213 04:07:51.391351 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2","Type":"ContainerStarted","Data":"d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7"} Dec 13 04:07:51 crc kubenswrapper[4766]: I1213 04:07:51.391362 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2","Type":"ContainerStarted","Data":"45a981aface99777b0fdc9da5fe60eb745bb07fa6c41bf0f8d5e8f17c7094257"} Dec 13 04:07:51 crc kubenswrapper[4766]: I1213 04:07:51.431697 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.43166887 podStartE2EDuration="2.43166887s" podCreationTimestamp="2025-12-13 04:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:07:51.416495427 +0000 UTC m=+1402.926428391" watchObservedRunningTime="2025-12-13 04:07:51.43166887 +0000 UTC m=+1402.941601834" Dec 13 04:08:00 crc kubenswrapper[4766]: I1213 04:08:00.074982 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:08:00 crc kubenswrapper[4766]: I1213 04:08:00.075899 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:08:00 crc kubenswrapper[4766]: I1213 04:08:00.109540 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:08:00 crc kubenswrapper[4766]: I1213 04:08:00.114503 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:08:00 crc kubenswrapper[4766]: I1213 04:08:00.566609 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:08:00 crc kubenswrapper[4766]: I1213 04:08:00.566689 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:08:03 crc kubenswrapper[4766]: I1213 04:08:03.011655 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:08:03 crc kubenswrapper[4766]: I1213 04:08:03.012892 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 04:08:03 crc kubenswrapper[4766]: I1213 04:08:03.112560 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:08:09 crc kubenswrapper[4766]: I1213 04:08:09.732180 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:08:09 crc kubenswrapper[4766]: I1213 04:08:09.732576 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.006602 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-fchgr"] Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.021838 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-fchgr"] Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.074784 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glancea79a-account-delete-9c5p9"] Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.076203 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancea79a-account-delete-9c5p9" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.095879 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancea79a-account-delete-9c5p9"] Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.134524 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.134967 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="dc903065-1311-4d54-940e-5792987bfed5" containerName="glance-log" containerID="cri-o://5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22" gracePeriod=30 Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.135077 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="dc903065-1311-4d54-940e-5792987bfed5" containerName="glance-httpd" containerID="cri-o://7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3" gracePeriod=30 Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.149385 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.149861 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerName="glance-log" containerID="cri-o://d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7" gracePeriod=30 Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.150349 4766 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerName="glance-httpd" containerID="cri-o://9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb" gracePeriod=30 Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.162063 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwnxr\" (UniqueName: \"kubernetes.io/projected/5ead4ca6-eb43-455f-869d-c333aa4c950d-kube-api-access-bwnxr\") pod \"glancea79a-account-delete-9c5p9\" (UID: \"5ead4ca6-eb43-455f-869d-c333aa4c950d\") " pod="glance-kuttl-tests/glancea79a-account-delete-9c5p9" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.225166 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstackclient"] Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.225446 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/openstackclient" podUID="a2b528cd-5920-43d4-a9b9-11a396597de6" containerName="openstackclient" containerID="cri-o://20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b" gracePeriod=30 Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.264241 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwnxr\" (UniqueName: \"kubernetes.io/projected/5ead4ca6-eb43-455f-869d-c333aa4c950d-kube-api-access-bwnxr\") pod \"glancea79a-account-delete-9c5p9\" (UID: \"5ead4ca6-eb43-455f-869d-c333aa4c950d\") " pod="glance-kuttl-tests/glancea79a-account-delete-9c5p9" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.289557 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwnxr\" (UniqueName: \"kubernetes.io/projected/5ead4ca6-eb43-455f-869d-c333aa4c950d-kube-api-access-bwnxr\") pod \"glancea79a-account-delete-9c5p9\" (UID: \"5ead4ca6-eb43-455f-869d-c333aa4c950d\") " pod="glance-kuttl-tests/glancea79a-account-delete-9c5p9" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.400580 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancea79a-account-delete-9c5p9" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.684319 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.733452 4766 generic.go:334] "Generic (PLEG): container finished" podID="dc903065-1311-4d54-940e-5792987bfed5" containerID="5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22" exitCode=143 Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.733627 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"dc903065-1311-4d54-940e-5792987bfed5","Type":"ContainerDied","Data":"5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22"} Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.738289 4766 generic.go:334] "Generic (PLEG): container finished" podID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerID="d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7" exitCode=143 Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.738411 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2","Type":"ContainerDied","Data":"d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7"} Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.741820 4766 generic.go:334] "Generic (PLEG): container finished" podID="a2b528cd-5920-43d4-a9b9-11a396597de6" containerID="20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b" exitCode=143 Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.741937 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"a2b528cd-5920-43d4-a9b9-11a396597de6","Type":"ContainerDied","Data":"20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b"} Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.741977 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"a2b528cd-5920-43d4-a9b9-11a396597de6","Type":"ContainerDied","Data":"d7d1e6d9f8405f1e890ba68e8f1484573db26ed048b39daf466f267e3ed0fe7a"} Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.742001 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.742014 4766 scope.go:117] "RemoveContainer" containerID="20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.771559 4766 scope.go:117] "RemoveContainer" containerID="20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b" Dec 13 04:08:18 crc kubenswrapper[4766]: E1213 04:08:18.772121 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b\": container with ID starting with 20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b not found: ID does not exist" containerID="20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.772173 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b"} err="failed to get container status \"20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b\": rpc error: code = NotFound desc = could not find container \"20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b\": container with ID starting with 20f66e174cbd2bfe8c60710f970c1854ca4e7edfd6f238cb6d958d69fbd0ff2b not found: ID does not exist" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.772139 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config\") pod \"a2b528cd-5920-43d4-a9b9-11a396597de6\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.772374 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd6ff\" (UniqueName: \"kubernetes.io/projected/a2b528cd-5920-43d4-a9b9-11a396597de6-kube-api-access-dd6ff\") pod \"a2b528cd-5920-43d4-a9b9-11a396597de6\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.772445 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config-secret\") pod \"a2b528cd-5920-43d4-a9b9-11a396597de6\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.772510 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-scripts\") pod \"a2b528cd-5920-43d4-a9b9-11a396597de6\" (UID: \"a2b528cd-5920-43d4-a9b9-11a396597de6\") " Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.774618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-scripts" (OuterVolumeSpecName: "openstack-scripts") pod "a2b528cd-5920-43d4-a9b9-11a396597de6" (UID: "a2b528cd-5920-43d4-a9b9-11a396597de6"). InnerVolumeSpecName "openstack-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.774918 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-scripts\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.779606 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b528cd-5920-43d4-a9b9-11a396597de6-kube-api-access-dd6ff" (OuterVolumeSpecName: "kube-api-access-dd6ff") pod "a2b528cd-5920-43d4-a9b9-11a396597de6" (UID: "a2b528cd-5920-43d4-a9b9-11a396597de6"). InnerVolumeSpecName "kube-api-access-dd6ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.792825 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a2b528cd-5920-43d4-a9b9-11a396597de6" (UID: "a2b528cd-5920-43d4-a9b9-11a396597de6"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.801973 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a2b528cd-5920-43d4-a9b9-11a396597de6" (UID: "a2b528cd-5920-43d4-a9b9-11a396597de6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.876920 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd6ff\" (UniqueName: \"kubernetes.io/projected/a2b528cd-5920-43d4-a9b9-11a396597de6-kube-api-access-dd6ff\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.876965 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.877026 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2b528cd-5920-43d4-a9b9-11a396597de6-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:18 crc kubenswrapper[4766]: I1213 04:08:18.902621 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancea79a-account-delete-9c5p9"] Dec 13 04:08:19 crc kubenswrapper[4766]: I1213 04:08:19.075304 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstackclient"] Dec 13 04:08:19 crc kubenswrapper[4766]: I1213 04:08:19.085155 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstackclient"] Dec 13 04:08:19 crc kubenswrapper[4766]: I1213 04:08:19.626002 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02ccd4a3-96bb-4def-b7a3-78eb39b898c1" path="/var/lib/kubelet/pods/02ccd4a3-96bb-4def-b7a3-78eb39b898c1/volumes" Dec 13 04:08:19 crc kubenswrapper[4766]: I1213 04:08:19.627320 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b528cd-5920-43d4-a9b9-11a396597de6" path="/var/lib/kubelet/pods/a2b528cd-5920-43d4-a9b9-11a396597de6/volumes" Dec 13 04:08:19 
crc kubenswrapper[4766]: I1213 04:08:19.753247 4766 generic.go:334] "Generic (PLEG): container finished" podID="5ead4ca6-eb43-455f-869d-c333aa4c950d" containerID="f97744ac9d5e49ce9f571f3a7dd9757d6671f5488a867dd793cb9042c41a3547" exitCode=0 Dec 13 04:08:19 crc kubenswrapper[4766]: I1213 04:08:19.753400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancea79a-account-delete-9c5p9" event={"ID":"5ead4ca6-eb43-455f-869d-c333aa4c950d","Type":"ContainerDied","Data":"f97744ac9d5e49ce9f571f3a7dd9757d6671f5488a867dd793cb9042c41a3547"} Dec 13 04:08:19 crc kubenswrapper[4766]: I1213 04:08:19.753544 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancea79a-account-delete-9c5p9" event={"ID":"5ead4ca6-eb43-455f-869d-c333aa4c950d","Type":"ContainerStarted","Data":"9a45a7d514ed30c264083c461f86a2b7bc8b88819ef71c7569be00efb1ced729"} Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.059763 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancea79a-account-delete-9c5p9" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.111377 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwnxr\" (UniqueName: \"kubernetes.io/projected/5ead4ca6-eb43-455f-869d-c333aa4c950d-kube-api-access-bwnxr\") pod \"5ead4ca6-eb43-455f-869d-c333aa4c950d\" (UID: \"5ead4ca6-eb43-455f-869d-c333aa4c950d\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.118139 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ead4ca6-eb43-455f-869d-c333aa4c950d-kube-api-access-bwnxr" (OuterVolumeSpecName: "kube-api-access-bwnxr") pod "5ead4ca6-eb43-455f-869d-c333aa4c950d" (UID: "5ead4ca6-eb43-455f-869d-c333aa4c950d"). InnerVolumeSpecName "kube-api-access-bwnxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.214153 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwnxr\" (UniqueName: \"kubernetes.io/projected/5ead4ca6-eb43-455f-869d-c333aa4c950d-kube-api-access-bwnxr\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.307814 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.105:9292/healthcheck\": read tcp 10.217.0.2:48366->10.217.0.105:9292: read: connection reset by peer" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.307879 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.105:9292/healthcheck\": read tcp 10.217.0.2:48354->10.217.0.105:9292: read: connection reset by peer" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.682118 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.700219 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.724415 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-config-data\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.724509 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-scripts\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.724563 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-iscsi\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.724716 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.724880 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-dev" (OuterVolumeSpecName: "dev") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.724590 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-dev\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725481 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-sys\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725505 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-lib-modules\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725563 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-sys" (OuterVolumeSpecName: "sys") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725611 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725618 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-httpd-run\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725699 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-run\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725769 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvgk2\" (UniqueName: \"kubernetes.io/projected/dc903065-1311-4d54-940e-5792987bfed5-kube-api-access-hvgk2\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725801 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725810 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-run" (OuterVolumeSpecName: "run") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725876 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-var-locks-brick\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.725960 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.726042 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.726101 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-logs\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.726154 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-nvme\") pod \"dc903065-1311-4d54-940e-5792987bfed5\" (UID: \"dc903065-1311-4d54-940e-5792987bfed5\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.726169 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.726216 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.726659 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-logs" (OuterVolumeSpecName: "logs") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.726999 4766 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-var-locks-brick\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.727025 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-logs\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.727039 4766 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-nvme\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.727051 4766 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-etc-iscsi\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.727066 4766 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-dev\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.727077 4766 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-sys\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.727088 4766 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-lib-modules\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.727099 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc903065-1311-4d54-940e-5792987bfed5-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.727112 4766 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dc903065-1311-4d54-940e-5792987bfed5-run\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.730178 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc903065-1311-4d54-940e-5792987bfed5-kube-api-access-hvgk2" (OuterVolumeSpecName: "kube-api-access-hvgk2") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "kube-api-access-hvgk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.730655 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.732327 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance-cache") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). 
InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.732461 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-scripts" (OuterVolumeSpecName: "scripts") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.772749 4766 generic.go:334] "Generic (PLEG): container finished" podID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerID="9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb" exitCode=0 Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.772811 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2","Type":"ContainerDied","Data":"9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb"} Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.772839 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2","Type":"ContainerDied","Data":"45a981aface99777b0fdc9da5fe60eb745bb07fa6c41bf0f8d5e8f17c7094257"} Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.772862 4766 scope.go:117] "RemoveContainer" containerID="9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.772987 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.775015 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancea79a-account-delete-9c5p9" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.775171 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-config-data" (OuterVolumeSpecName: "config-data") pod "dc903065-1311-4d54-940e-5792987bfed5" (UID: "dc903065-1311-4d54-940e-5792987bfed5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.775205 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancea79a-account-delete-9c5p9" event={"ID":"5ead4ca6-eb43-455f-869d-c333aa4c950d","Type":"ContainerDied","Data":"9a45a7d514ed30c264083c461f86a2b7bc8b88819ef71c7569be00efb1ced729"} Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.775224 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a45a7d514ed30c264083c461f86a2b7bc8b88819ef71c7569be00efb1ced729" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.777459 4766 generic.go:334] "Generic (PLEG): container finished" podID="dc903065-1311-4d54-940e-5792987bfed5" containerID="7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3" exitCode=0 Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.777486 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"dc903065-1311-4d54-940e-5792987bfed5","Type":"ContainerDied","Data":"7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3"} Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.777501 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"dc903065-1311-4d54-940e-5792987bfed5","Type":"ContainerDied","Data":"7e9944d4bd65a87d849782d871788430369ee463975417b2cf1cb16e4981f4cc"} Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.777558 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.795715 4766 scope.go:117] "RemoveContainer" containerID="d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.818690 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.823362 4766 scope.go:117] "RemoveContainer" containerID="9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb" Dec 13 04:08:21 crc kubenswrapper[4766]: E1213 04:08:21.823886 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb\": container with ID starting with 9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb not found: ID does not exist" containerID="9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.823932 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb"} err="failed to get container status \"9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb\": rpc error: code = NotFound desc = could not find container \"9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb\": container with ID starting with 9d835e30fa9e83b24d4d78e344f830b2e42c079c50c08f928af941ca4cf425fb not found: ID does not exist" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.823962 4766 scope.go:117] "RemoveContainer" containerID="d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7" Dec 13 04:08:21 crc kubenswrapper[4766]: E1213 04:08:21.824291 4766 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7\": container with ID starting with d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7 not found: ID does not exist" containerID="d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.824332 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7"} err="failed to get container status \"d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7\": rpc error: code = NotFound desc = could not find container \"d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7\": container with ID starting with d5b8c49b8d48d78f007247427e6a1fc34a6291ea5d5b70768744c688a55ea5c7 not found: ID does not exist" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.824360 4766 scope.go:117] "RemoveContainer" containerID="7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.825832 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828173 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828230 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-dev\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828275 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828333 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-nvme\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828361 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s97g\" (UniqueName: \"kubernetes.io/projected/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-kube-api-access-4s97g\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828383 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-lib-modules\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828402 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-iscsi\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828423 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-sys\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828456 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-httpd-run\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828483 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-scripts\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828523 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-var-locks-brick\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828544 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-logs\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828569 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-run\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828589 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-config-data\") pod \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\" (UID: \"f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2\") " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828854 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828879 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-config-data\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828893 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc903065-1311-4d54-940e-5792987bfed5-scripts\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828905 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvgk2\" (UniqueName: 
\"kubernetes.io/projected/dc903065-1311-4d54-940e-5792987bfed5-kube-api-access-hvgk2\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.828920 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.829535 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-sys" (OuterVolumeSpecName: "sys") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.829736 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-dev" (OuterVolumeSpecName: "dev") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.829948 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.830189 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-logs" (OuterVolumeSpecName: "logs") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.830527 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.831068 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-run" (OuterVolumeSpecName: "run") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.831103 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.831146 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.831743 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.834587 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-scripts" (OuterVolumeSpecName: "scripts") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.834620 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-kube-api-access-4s97g" (OuterVolumeSpecName: "kube-api-access-4s97g") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "kube-api-access-4s97g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.835076 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.835599 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance-cache") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.844543 4766 scope.go:117] "RemoveContainer" containerID="5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.860048 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.860109 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.874490 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-config-data" (OuterVolumeSpecName: "config-data") pod "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" (UID: "f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.875710 4766 scope.go:117] "RemoveContainer" containerID="7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3" Dec 13 04:08:21 crc kubenswrapper[4766]: E1213 04:08:21.876151 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3\": container with ID starting with 7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3 not found: ID does not exist" containerID="7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.876200 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3"} err="failed to get container status \"7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3\": rpc error: code = NotFound desc = could not find container \"7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3\": container with ID starting with 7346a1e30cd416ab05861573f6b507b6713b970f26fa7a836b02efa8ebb0b6a3 not found: ID does not exist" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.876234 4766 scope.go:117] "RemoveContainer" containerID="5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22" Dec 13 04:08:21 crc kubenswrapper[4766]: E1213 04:08:21.876582 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22\": container with ID starting with 5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22 not found: ID does not exist" containerID="5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.876619 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22"} err="failed to get container status \"5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22\": rpc error: code = NotFound desc = could not find container \"5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22\": container with ID starting with 
5708dc6da2f745bdb2cf32445b3002dc13675f55530bc7e31c1feddc9c108e22 not found: ID does not exist" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.929994 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-logs\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930042 4766 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-run\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930052 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-config-data\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930065 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930102 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930115 4766 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-dev\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930134 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930147 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930157 4766 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-nvme\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930169 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s97g\" (UniqueName: \"kubernetes.io/projected/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-kube-api-access-4s97g\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930185 4766 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-lib-modules\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930197 4766 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-etc-iscsi\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930207 4766 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-sys\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930217 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930228 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-scripts\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.930239 4766 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2-var-locks-brick\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.944421 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 13 04:08:21 crc kubenswrapper[4766]: I1213 04:08:21.944486 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 13 04:08:22 crc kubenswrapper[4766]: I1213 04:08:22.032490 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:22 crc kubenswrapper[4766]: I1213 04:08:22.032539 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:22 crc kubenswrapper[4766]: I1213 04:08:22.113286 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Dec 13 04:08:22 crc kubenswrapper[4766]: I1213 04:08:22.121005 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.084597 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-ztsg8"] Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.090863 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-ztsg8"] Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.096450 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-a79a-account-create-ndmtj"] Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.107770 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glancea79a-account-delete-9c5p9"] Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.117155 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-a79a-account-create-ndmtj"] Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.122916 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glancea79a-account-delete-9c5p9"] Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.289993 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-rv7lj"] Dec 13 04:08:23 crc kubenswrapper[4766]: E1213 04:08:23.290508 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc903065-1311-4d54-940e-5792987bfed5" containerName="glance-httpd" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290531 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc903065-1311-4d54-940e-5792987bfed5" containerName="glance-httpd" Dec 13 04:08:23 crc kubenswrapper[4766]: 
E1213 04:08:23.290572 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerName="glance-httpd" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290579 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerName="glance-httpd" Dec 13 04:08:23 crc kubenswrapper[4766]: E1213 04:08:23.290596 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ead4ca6-eb43-455f-869d-c333aa4c950d" containerName="mariadb-account-delete" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290602 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ead4ca6-eb43-455f-869d-c333aa4c950d" containerName="mariadb-account-delete" Dec 13 04:08:23 crc kubenswrapper[4766]: E1213 04:08:23.290648 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerName="glance-log" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290659 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerName="glance-log" Dec 13 04:08:23 crc kubenswrapper[4766]: E1213 04:08:23.290683 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b528cd-5920-43d4-a9b9-11a396597de6" containerName="openstackclient" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290690 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b528cd-5920-43d4-a9b9-11a396597de6" containerName="openstackclient" Dec 13 04:08:23 crc kubenswrapper[4766]: E1213 04:08:23.290740 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc903065-1311-4d54-940e-5792987bfed5" containerName="glance-log" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290749 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc903065-1311-4d54-940e-5792987bfed5" containerName="glance-log" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290951 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b528cd-5920-43d4-a9b9-11a396597de6" containerName="openstackclient" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290966 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc903065-1311-4d54-940e-5792987bfed5" containerName="glance-httpd" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290977 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc903065-1311-4d54-940e-5792987bfed5" containerName="glance-log" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290987 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerName="glance-httpd" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.290994 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ead4ca6-eb43-455f-869d-c333aa4c950d" containerName="mariadb-account-delete" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.291007 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" containerName="glance-log" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.291721 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-rv7lj" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.295998 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-rv7lj"] Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.320693 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjvzd\" (UniqueName: \"kubernetes.io/projected/fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6-kube-api-access-cjvzd\") pod \"glance-db-create-rv7lj\" (UID: \"fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6\") " pod="glance-kuttl-tests/glance-db-create-rv7lj" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.422085 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjvzd\" (UniqueName: \"kubernetes.io/projected/fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6-kube-api-access-cjvzd\") pod \"glance-db-create-rv7lj\" (UID: \"fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6\") " pod="glance-kuttl-tests/glance-db-create-rv7lj" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.452817 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjvzd\" (UniqueName: \"kubernetes.io/projected/fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6-kube-api-access-cjvzd\") pod \"glance-db-create-rv7lj\" (UID: \"fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6\") " pod="glance-kuttl-tests/glance-db-create-rv7lj" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.626279 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-rv7lj" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.628805 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442b9064-0df6-4211-92fb-fca28828c3a1" path="/var/lib/kubelet/pods/442b9064-0df6-4211-92fb-fca28828c3a1/volumes" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.629982 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ead4ca6-eb43-455f-869d-c333aa4c950d" path="/var/lib/kubelet/pods/5ead4ca6-eb43-455f-869d-c333aa4c950d/volumes" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.630790 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862dcb98-e52e-4e7e-a77d-ecf08b87fee1" path="/var/lib/kubelet/pods/862dcb98-e52e-4e7e-a77d-ecf08b87fee1/volumes" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.631702 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc903065-1311-4d54-940e-5792987bfed5" path="/var/lib/kubelet/pods/dc903065-1311-4d54-940e-5792987bfed5/volumes" Dec 13 04:08:23 crc kubenswrapper[4766]: I1213 04:08:23.633564 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2" path="/var/lib/kubelet/pods/f45b4f61-69d9-4cc0-b6e4-f6ddf6e8e7e2/volumes" Dec 13 04:08:24 crc kubenswrapper[4766]: I1213 04:08:24.141854 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-rv7lj"] Dec 13 04:08:25 crc kubenswrapper[4766]: I1213 04:08:25.114720 4766 generic.go:334] "Generic (PLEG): container finished" podID="fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6" containerID="cbe4177e5e7f133cb6b0f7fe441435ca3e3e95dd26f975f3e886d3e0244be004" exitCode=0 Dec 13 04:08:25 crc kubenswrapper[4766]: I1213 04:08:25.114794 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-rv7lj" 
event={"ID":"fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6","Type":"ContainerDied","Data":"cbe4177e5e7f133cb6b0f7fe441435ca3e3e95dd26f975f3e886d3e0244be004"} Dec 13 04:08:25 crc kubenswrapper[4766]: I1213 04:08:25.115095 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-rv7lj" event={"ID":"fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6","Type":"ContainerStarted","Data":"f13cd3516b999b08ea4cf1e94f32bf7ac570ac3127a52b03bdbcebb323b3fb51"} Dec 13 04:08:26 crc kubenswrapper[4766]: I1213 04:08:26.548536 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-rv7lj" Dec 13 04:08:26 crc kubenswrapper[4766]: I1213 04:08:26.726104 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjvzd\" (UniqueName: \"kubernetes.io/projected/fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6-kube-api-access-cjvzd\") pod \"fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6\" (UID: \"fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6\") " Dec 13 04:08:26 crc kubenswrapper[4766]: I1213 04:08:26.732191 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6-kube-api-access-cjvzd" (OuterVolumeSpecName: "kube-api-access-cjvzd") pod "fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6" (UID: "fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6"). InnerVolumeSpecName "kube-api-access-cjvzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:08:26 crc kubenswrapper[4766]: I1213 04:08:26.828138 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjvzd\" (UniqueName: \"kubernetes.io/projected/fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6-kube-api-access-cjvzd\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:27 crc kubenswrapper[4766]: I1213 04:08:27.138735 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-rv7lj" event={"ID":"fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6","Type":"ContainerDied","Data":"f13cd3516b999b08ea4cf1e94f32bf7ac570ac3127a52b03bdbcebb323b3fb51"} Dec 13 04:08:27 crc kubenswrapper[4766]: I1213 04:08:27.139227 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13cd3516b999b08ea4cf1e94f32bf7ac570ac3127a52b03bdbcebb323b3fb51" Dec 13 04:08:27 crc kubenswrapper[4766]: I1213 04:08:27.138789 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-rv7lj" Dec 13 04:08:33 crc kubenswrapper[4766]: I1213 04:08:33.329645 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-464e-account-create-tswck"] Dec 13 04:08:33 crc kubenswrapper[4766]: E1213 04:08:33.332159 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6" containerName="mariadb-database-create" Dec 13 04:08:33 crc kubenswrapper[4766]: I1213 04:08:33.332441 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6" containerName="mariadb-database-create" Dec 13 04:08:33 crc kubenswrapper[4766]: I1213 04:08:33.332871 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6" containerName="mariadb-database-create" Dec 13 04:08:33 crc kubenswrapper[4766]: I1213 04:08:33.334110 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-464e-account-create-tswck" Dec 13 04:08:33 crc kubenswrapper[4766]: I1213 04:08:33.338398 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Dec 13 04:08:33 crc kubenswrapper[4766]: I1213 04:08:33.338782 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-464e-account-create-tswck"] Dec 13 04:08:33 crc kubenswrapper[4766]: I1213 04:08:33.475887 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2x2\" (UniqueName: \"kubernetes.io/projected/153113c7-f43b-44c2-91a1-a77f8b1e4def-kube-api-access-sz2x2\") pod \"glance-464e-account-create-tswck\" (UID: \"153113c7-f43b-44c2-91a1-a77f8b1e4def\") " pod="glance-kuttl-tests/glance-464e-account-create-tswck" Dec 13 04:08:33 crc kubenswrapper[4766]: I1213 04:08:33.577235 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2x2\" (UniqueName: \"kubernetes.io/projected/153113c7-f43b-44c2-91a1-a77f8b1e4def-kube-api-access-sz2x2\") pod \"glance-464e-account-create-tswck\" (UID: \"153113c7-f43b-44c2-91a1-a77f8b1e4def\") " pod="glance-kuttl-tests/glance-464e-account-create-tswck" Dec 13 04:08:33 crc kubenswrapper[4766]: I1213 04:08:33.599511 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2x2\" (UniqueName: \"kubernetes.io/projected/153113c7-f43b-44c2-91a1-a77f8b1e4def-kube-api-access-sz2x2\") pod \"glance-464e-account-create-tswck\" (UID: \"153113c7-f43b-44c2-91a1-a77f8b1e4def\") " pod="glance-kuttl-tests/glance-464e-account-create-tswck" Dec 13 04:08:33 crc kubenswrapper[4766]: I1213 04:08:33.656335 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-464e-account-create-tswck" Dec 13 04:08:34 crc kubenswrapper[4766]: I1213 04:08:34.085481 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-464e-account-create-tswck"] Dec 13 04:08:34 crc kubenswrapper[4766]: I1213 04:08:34.202109 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-464e-account-create-tswck" event={"ID":"153113c7-f43b-44c2-91a1-a77f8b1e4def","Type":"ContainerStarted","Data":"17efee9dd6c7a43665dce443dfcde6dc4cac407e9f46cd339e136d44f8aa0df0"} Dec 13 04:08:35 crc kubenswrapper[4766]: I1213 04:08:35.227126 4766 generic.go:334] "Generic (PLEG): container finished" podID="153113c7-f43b-44c2-91a1-a77f8b1e4def" containerID="e93332c99f7a79af52ced1e12631e08ce73c8884516fc57749abbefa5be4ca88" exitCode=0 Dec 13 04:08:35 crc kubenswrapper[4766]: I1213 04:08:35.227083 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-464e-account-create-tswck" event={"ID":"153113c7-f43b-44c2-91a1-a77f8b1e4def","Type":"ContainerDied","Data":"e93332c99f7a79af52ced1e12631e08ce73c8884516fc57749abbefa5be4ca88"} Dec 13 04:08:36 crc kubenswrapper[4766]: I1213 04:08:36.551375 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-464e-account-create-tswck" Dec 13 04:08:36 crc kubenswrapper[4766]: I1213 04:08:36.729086 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz2x2\" (UniqueName: \"kubernetes.io/projected/153113c7-f43b-44c2-91a1-a77f8b1e4def-kube-api-access-sz2x2\") pod \"153113c7-f43b-44c2-91a1-a77f8b1e4def\" (UID: \"153113c7-f43b-44c2-91a1-a77f8b1e4def\") " Dec 13 04:08:36 crc kubenswrapper[4766]: I1213 04:08:36.735823 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153113c7-f43b-44c2-91a1-a77f8b1e4def-kube-api-access-sz2x2" (OuterVolumeSpecName: "kube-api-access-sz2x2") pod "153113c7-f43b-44c2-91a1-a77f8b1e4def" (UID: "153113c7-f43b-44c2-91a1-a77f8b1e4def"). InnerVolumeSpecName "kube-api-access-sz2x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:08:36 crc kubenswrapper[4766]: I1213 04:08:36.831280 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz2x2\" (UniqueName: \"kubernetes.io/projected/153113c7-f43b-44c2-91a1-a77f8b1e4def-kube-api-access-sz2x2\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:37 crc kubenswrapper[4766]: I1213 04:08:37.249046 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-464e-account-create-tswck" event={"ID":"153113c7-f43b-44c2-91a1-a77f8b1e4def","Type":"ContainerDied","Data":"17efee9dd6c7a43665dce443dfcde6dc4cac407e9f46cd339e136d44f8aa0df0"} Dec 13 04:08:37 crc kubenswrapper[4766]: I1213 04:08:37.249142 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-464e-account-create-tswck" Dec 13 04:08:37 crc kubenswrapper[4766]: I1213 04:08:37.249179 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17efee9dd6c7a43665dce443dfcde6dc4cac407e9f46cd339e136d44f8aa0df0" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.457349 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-vzt9z"] Dec 13 04:08:38 crc kubenswrapper[4766]: E1213 04:08:38.458145 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153113c7-f43b-44c2-91a1-a77f8b1e4def" containerName="mariadb-account-create" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.458163 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="153113c7-f43b-44c2-91a1-a77f8b1e4def" containerName="mariadb-account-create" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.458365 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="153113c7-f43b-44c2-91a1-a77f8b1e4def" containerName="mariadb-account-create" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.459120 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.461161 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-lxpch" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.462741 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.468212 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-vzt9z"] Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.559710 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-config-data\") pod \"glance-db-sync-vzt9z\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.559813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59vq2\" (UniqueName: \"kubernetes.io/projected/e17456b9-737f-498f-bfe9-97257ae7ae6d-kube-api-access-59vq2\") pod \"glance-db-sync-vzt9z\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.559877 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-db-sync-config-data\") pod \"glance-db-sync-vzt9z\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.661800 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59vq2\" (UniqueName: \"kubernetes.io/projected/e17456b9-737f-498f-bfe9-97257ae7ae6d-kube-api-access-59vq2\") pod \"glance-db-sync-vzt9z\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.662556 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-db-sync-config-data\") pod \"glance-db-sync-vzt9z\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.662829 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-config-data\") pod \"glance-db-sync-vzt9z\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.667558 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-db-sync-config-data\") pod \"glance-db-sync-vzt9z\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.673235 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-config-data\") pod \"glance-db-sync-vzt9z\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.681148 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59vq2\" (UniqueName: \"kubernetes.io/projected/e17456b9-737f-498f-bfe9-97257ae7ae6d-kube-api-access-59vq2\") pod \"glance-db-sync-vzt9z\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:38 crc kubenswrapper[4766]: I1213 04:08:38.891309 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:40 crc kubenswrapper[4766]: I1213 04:08:39.732449 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:08:40 crc kubenswrapper[4766]: I1213 04:08:39.732873 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:08:40 crc kubenswrapper[4766]: I1213 04:08:40.334299 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-vzt9z"] Dec 13 04:08:41 crc kubenswrapper[4766]: I1213 04:08:41.288367 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-vzt9z" event={"ID":"e17456b9-737f-498f-bfe9-97257ae7ae6d","Type":"ContainerStarted","Data":"075e9291eb8d7dad8d7131f39c11abb1410af9a5fb91a0e69d03b3ce4f8ab67b"} Dec 13 04:08:41 crc kubenswrapper[4766]: I1213 04:08:41.289005 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-vzt9z" event={"ID":"e17456b9-737f-498f-bfe9-97257ae7ae6d","Type":"ContainerStarted","Data":"fb75f30e8ff179a786e19bf64f7854247a5844b495fd9346aa8630468b33f622"} Dec 13 04:08:41 crc kubenswrapper[4766]: I1213 04:08:41.311455 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-vzt9z" podStartSLOduration=3.3114148930000002 podStartE2EDuration="3.311414893s" podCreationTimestamp="2025-12-13 04:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:08:41.307676054 +0000 UTC m=+1452.817609028" watchObservedRunningTime="2025-12-13 04:08:41.311414893 +0000 UTC m=+1452.821347857" Dec 13 04:08:44 crc kubenswrapper[4766]: I1213 04:08:44.319916 4766 generic.go:334] "Generic (PLEG): container finished" podID="e17456b9-737f-498f-bfe9-97257ae7ae6d" containerID="075e9291eb8d7dad8d7131f39c11abb1410af9a5fb91a0e69d03b3ce4f8ab67b" exitCode=0 Dec 13 04:08:44 crc kubenswrapper[4766]: I1213 04:08:44.320451 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-vzt9z" event={"ID":"e17456b9-737f-498f-bfe9-97257ae7ae6d","Type":"ContainerDied","Data":"075e9291eb8d7dad8d7131f39c11abb1410af9a5fb91a0e69d03b3ce4f8ab67b"} Dec 13 04:08:45 crc kubenswrapper[4766]: I1213 04:08:45.622008 4766 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:45 crc kubenswrapper[4766]: I1213 04:08:45.776349 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-config-data\") pod \"e17456b9-737f-498f-bfe9-97257ae7ae6d\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " Dec 13 04:08:45 crc kubenswrapper[4766]: I1213 04:08:45.776565 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59vq2\" (UniqueName: \"kubernetes.io/projected/e17456b9-737f-498f-bfe9-97257ae7ae6d-kube-api-access-59vq2\") pod \"e17456b9-737f-498f-bfe9-97257ae7ae6d\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " Dec 13 04:08:45 crc kubenswrapper[4766]: I1213 04:08:45.776601 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-db-sync-config-data\") pod \"e17456b9-737f-498f-bfe9-97257ae7ae6d\" (UID: \"e17456b9-737f-498f-bfe9-97257ae7ae6d\") " Dec 13 04:08:45 crc kubenswrapper[4766]: I1213 04:08:45.784587 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e17456b9-737f-498f-bfe9-97257ae7ae6d" (UID: "e17456b9-737f-498f-bfe9-97257ae7ae6d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:08:45 crc kubenswrapper[4766]: I1213 04:08:45.785591 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17456b9-737f-498f-bfe9-97257ae7ae6d-kube-api-access-59vq2" (OuterVolumeSpecName: "kube-api-access-59vq2") pod "e17456b9-737f-498f-bfe9-97257ae7ae6d" (UID: "e17456b9-737f-498f-bfe9-97257ae7ae6d"). InnerVolumeSpecName "kube-api-access-59vq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:08:45 crc kubenswrapper[4766]: I1213 04:08:45.832649 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-config-data" (OuterVolumeSpecName: "config-data") pod "e17456b9-737f-498f-bfe9-97257ae7ae6d" (UID: "e17456b9-737f-498f-bfe9-97257ae7ae6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:08:45 crc kubenswrapper[4766]: I1213 04:08:45.878710 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:45 crc kubenswrapper[4766]: I1213 04:08:45.878747 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17456b9-737f-498f-bfe9-97257ae7ae6d-config-data\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:45 crc kubenswrapper[4766]: I1213 04:08:45.878759 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59vq2\" (UniqueName: \"kubernetes.io/projected/e17456b9-737f-498f-bfe9-97257ae7ae6d-kube-api-access-59vq2\") on node \"crc\" DevicePath \"\"" Dec 13 04:08:46 crc kubenswrapper[4766]: I1213 04:08:46.336507 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-vzt9z" event={"ID":"e17456b9-737f-498f-bfe9-97257ae7ae6d","Type":"ContainerDied","Data":"fb75f30e8ff179a786e19bf64f7854247a5844b495fd9346aa8630468b33f622"} Dec 13 04:08:46 crc kubenswrapper[4766]: I1213 04:08:46.336557 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb75f30e8ff179a786e19bf64f7854247a5844b495fd9346aa8630468b33f622" Dec 13 04:08:46 crc kubenswrapper[4766]: I1213 04:08:46.336554 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-vzt9z" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.431221 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Dec 13 04:08:47 crc kubenswrapper[4766]: E1213 04:08:47.431967 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17456b9-737f-498f-bfe9-97257ae7ae6d" containerName="glance-db-sync" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.431984 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17456b9-737f-498f-bfe9-97257ae7ae6d" containerName="glance-db-sync" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.432193 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17456b9-737f-498f-bfe9-97257ae7ae6d" containerName="glance-db-sync" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.433218 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.436738 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.436791 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.437248 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-lxpch" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.473536 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.605989 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/920897da-f7c3-455b-b8a9-1491e8543ed5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606049 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pqbn\" (UniqueName: \"kubernetes.io/projected/920897da-f7c3-455b-b8a9-1491e8543ed5-kube-api-access-8pqbn\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606082 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-dev\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606107 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-run\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606180 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920897da-f7c3-455b-b8a9-1491e8543ed5-scripts\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606210 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606229 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: 
\"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606267 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606285 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920897da-f7c3-455b-b8a9-1491e8543ed5-config-data\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606300 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606323 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/920897da-f7c3-455b-b8a9-1491e8543ed5-logs\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606347 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606375 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-sys\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.606407 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.707763 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/920897da-f7c3-455b-b8a9-1491e8543ed5-logs\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.707833 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.707869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-sys\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.707905 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.707927 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/920897da-f7c3-455b-b8a9-1491e8543ed5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.707951 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pqbn\" (UniqueName: \"kubernetes.io/projected/920897da-f7c3-455b-b8a9-1491e8543ed5-kube-api-access-8pqbn\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.707975 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-dev\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708000 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-run\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708018 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920897da-f7c3-455b-b8a9-1491e8543ed5-scripts\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708030 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708043 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-etc-iscsi\") pod 
\"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708112 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708186 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708211 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920897da-f7c3-455b-b8a9-1491e8543ed5-config-data\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708227 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708313 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708568 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/920897da-f7c3-455b-b8a9-1491e8543ed5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708611 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-run\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708662 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/920897da-f7c3-455b-b8a9-1491e8543ed5-logs\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708078 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: 
\"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708800 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708822 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-dev\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708838 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-sys\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708875 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/920897da-f7c3-455b-b8a9-1491e8543ed5-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.708938 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.714237 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920897da-f7c3-455b-b8a9-1491e8543ed5-scripts\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.725992 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920897da-f7c3-455b-b8a9-1491e8543ed5-config-data\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.729565 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pqbn\" (UniqueName: \"kubernetes.io/projected/920897da-f7c3-455b-b8a9-1491e8543ed5-kube-api-access-8pqbn\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.730316 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.731255 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"920897da-f7c3-455b-b8a9-1491e8543ed5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.756742 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.783807 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.787487 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.796291 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.811051 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930184 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930237 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-dev\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930277 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930306 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-sys\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930337 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930372 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930396 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930448 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-run\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930476 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930505 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930537 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-logs\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qflq\" (UniqueName: \"kubernetes.io/projected/332ed8b7-609a-4526-9e2a-e09b1723752e-kube-api-access-4qflq\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930620 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:47 crc kubenswrapper[4766]: I1213 04:08:47.930645 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.031703 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.031773 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.031809 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-logs\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.031861 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qflq\" (UniqueName: \"kubernetes.io/projected/332ed8b7-609a-4526-9e2a-e09b1723752e-kube-api-access-4qflq\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.031870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.031897 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.031956 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032009 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032032 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-dev\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032084 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032115 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-sys\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032165 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032210 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032249 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-run\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032348 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-run\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032653 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032791 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032805 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.032947 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.033395 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.033520 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-dev\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.033982 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.034919 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-sys\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.035226 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-logs\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.038310 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.039698 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.053456 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qflq\" (UniqueName: \"kubernetes.io/projected/332ed8b7-609a-4526-9e2a-e09b1723752e-kube-api-access-4qflq\") pod 
\"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.055826 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.059355 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.152590 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.240737 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.367952 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"920897da-f7c3-455b-b8a9-1491e8543ed5","Type":"ContainerStarted","Data":"97f031ad143dd578a052d5388f8e56ce751afb59cd5e1654e44903a62e9334a9"} Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.411715 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Dec 13 04:08:48 crc kubenswrapper[4766]: I1213 04:08:48.605269 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.380852 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"920897da-f7c3-455b-b8a9-1491e8543ed5","Type":"ContainerStarted","Data":"c63885ad87d7a7b97a2c8c910f217575811d963e277938c7e8869180201cf8db"} Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.381531 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"920897da-f7c3-455b-b8a9-1491e8543ed5","Type":"ContainerStarted","Data":"7aa45c8b866922ceed6acf9bfb21c806ab08862569431730d8ae984719d5aa80"} Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.381547 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"920897da-f7c3-455b-b8a9-1491e8543ed5","Type":"ContainerStarted","Data":"0f9dbf8e06ff42e156b3d9ec38ebb145e2b52f4f29115d99e9107d4d4b1bb7eb"} Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.383783 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"332ed8b7-609a-4526-9e2a-e09b1723752e","Type":"ContainerStarted","Data":"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e"} Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.383810 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"332ed8b7-609a-4526-9e2a-e09b1723752e","Type":"ContainerStarted","Data":"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd"} Dec 13 
Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.383821 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"332ed8b7-609a-4526-9e2a-e09b1723752e","Type":"ContainerStarted","Data":"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4"}
Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.383832 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"332ed8b7-609a-4526-9e2a-e09b1723752e","Type":"ContainerStarted","Data":"a54ac2f5ed46b42cddfa95ffa8f3f22fec0276e10a6f71177caa2942ff580664"}
Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.384125 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-log" containerID="cri-o://704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4" gracePeriod=30
Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.384302 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-httpd" containerID="cri-o://693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd" gracePeriod=30
Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.384341 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-api" containerID="cri-o://7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e" gracePeriod=30
Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.408851 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.408836789 podStartE2EDuration="2.408836789s" podCreationTimestamp="2025-12-13 04:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:08:49.406656335 +0000 UTC m=+1460.916589319" watchObservedRunningTime="2025-12-13 04:08:49.408836789 +0000 UTC m=+1460.918769753"
Dec 13 04:08:49 crc kubenswrapper[4766]: I1213 04:08:49.445641 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=3.445619273 podStartE2EDuration="3.445619273s" podCreationTimestamp="2025-12-13 04:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:08:49.43730481 +0000 UTC m=+1460.947237794" watchObservedRunningTime="2025-12-13 04:08:49.445619273 +0000 UTC m=+1460.955552237"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.346706 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
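[Editor's note: the pod_startup_latency_tracker entries above are internally consistent: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp (the zero 0001-01-01 pull timestamps indicate no image pull was recorded), and the m=+… suffixes are Go monotonic-clock offsets, i.e. seconds since the kubelet process started. A quick sanity check of the external pod's 2.408836789 figure, as a sketch:]

```go
// Recomputing podStartSLOduration for glance-default-external-api-0 from the
// timestamps logged above: watchObservedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-12-13T04:08:47Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-12-13T04:08:49.408836789Z")
	fmt.Println(observed.Sub(created)) // 2.408836789s, matching podStartSLOduration
}
```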
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.398311 4766 generic.go:334] "Generic (PLEG): container finished" podID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerID="7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e" exitCode=143
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.398361 4766 generic.go:334] "Generic (PLEG): container finished" podID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerID="693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd" exitCode=143
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.398368 4766 generic.go:334] "Generic (PLEG): container finished" podID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerID="704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4" exitCode=143
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.399580 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"332ed8b7-609a-4526-9e2a-e09b1723752e","Type":"ContainerDied","Data":"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e"}
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.399652 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.399687 4766 scope.go:117] "RemoveContainer" containerID="7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.399669 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"332ed8b7-609a-4526-9e2a-e09b1723752e","Type":"ContainerDied","Data":"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd"}
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.399901 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"332ed8b7-609a-4526-9e2a-e09b1723752e","Type":"ContainerDied","Data":"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4"}
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.399954 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"332ed8b7-609a-4526-9e2a-e09b1723752e","Type":"ContainerDied","Data":"a54ac2f5ed46b42cddfa95ffa8f3f22fec0276e10a6f71177caa2942ff580664"}
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.430355 4766 scope.go:117] "RemoveContainer" containerID="693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.460031 4766 scope.go:117] "RemoveContainer" containerID="704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.477709 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-run\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.477811 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
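[Editor's note: the gracePeriod=30 kills earlier line up with the exitCode=143 results above: container runtimes conventionally report death-by-signal as 128 plus the signal number, so 143 is SIGTERM (15), meaning all three glance containers shut down on the graceful signal rather than being escalated to SIGKILL (which would read 137). A small decoding sketch:]

```go
// Decoding the exit codes above using the 128+signo convention.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	for _, code := range []int{143, 137} {
		if code > 128 {
			sig := syscall.Signal(code - 128)
			// On Linux this prints "terminated" (SIGTERM) for 143 and "killed" (SIGKILL) for 137.
			fmt.Printf("exit %d => killed by signal %d (%s)\n", code, code-128, sig)
		}
	}
}
```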
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.477889 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-logs\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.477878 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-run" (OuterVolumeSpecName: "run") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.477935 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-sys\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-lib-modules\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478039 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qflq\" (UniqueName: \"kubernetes.io/projected/332ed8b7-609a-4526-9e2a-e09b1723752e-kube-api-access-4qflq\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478104 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-config-data\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478130 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-scripts\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478141 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-sys" (OuterVolumeSpecName: "sys") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478161 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-var-locks-brick\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478182 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478266 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-iscsi\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478271 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-logs" (OuterVolumeSpecName: "logs") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478322 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478330 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-dev\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478355 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-dev" (OuterVolumeSpecName: "dev") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478507 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478536 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-nvme\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.478605 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-httpd-run\") pod \"332ed8b7-609a-4526-9e2a-e09b1723752e\" (UID: \"332ed8b7-609a-4526-9e2a-e09b1723752e\") "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.479001 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.479109 4766 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-sys\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.479127 4766 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-lib-modules\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.479137 4766 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-var-locks-brick\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.479149 4766 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-iscsi\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.479158 4766 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-dev\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.479177 4766 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-etc-nvme\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.479185 4766 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/332ed8b7-609a-4526-9e2a-e09b1723752e-run\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.479193 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-logs\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.479290 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.482643 4766 scope.go:117] "RemoveContainer" containerID="7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e"
Dec 13 04:08:50 crc kubenswrapper[4766]: E1213 04:08:50.484617 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e\": container with ID starting with 7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e not found: ID does not exist" containerID="7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.484678 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e"} err="failed to get container status \"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e\": rpc error: code = NotFound desc = could not find container \"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e\": container with ID starting with 7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e not found: ID does not exist"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.484728 4766 scope.go:117] "RemoveContainer" containerID="693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd"
Dec 13 04:08:50 crc kubenswrapper[4766]: E1213 04:08:50.485501 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd\": container with ID starting with 693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd not found: ID does not exist" containerID="693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.485538 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd"} err="failed to get container status \"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd\": rpc error: code = NotFound desc = could not find container \"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd\": container with ID starting with 693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd not found: ID does not exist"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.485567 4766 scope.go:117] "RemoveContainer" containerID="704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4"
Dec 13 04:08:50 crc kubenswrapper[4766]: E1213 04:08:50.486032 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4\": container with ID starting with 704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4 not found: ID does not exist" containerID="704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.486104 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4"} err="failed to get container status \"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4\": rpc error: code = NotFound desc = could not find container \"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4\": container with ID starting with 704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4 not found: ID does not exist"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.486159 4766 scope.go:117] "RemoveContainer" containerID="7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.486824 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e"} err="failed to get container status \"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e\": rpc error: code = NotFound desc = could not find container \"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e\": container with ID starting with 7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e not found: ID does not exist"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.486874 4766 scope.go:117] "RemoveContainer" containerID="693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.487403 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd"} err="failed to get container status \"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd\": rpc error: code = NotFound desc = could not find container \"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd\": container with ID starting with 693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd not found: ID does not exist"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.487421 4766 scope.go:117] "RemoveContainer" containerID="704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.487743 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.487767 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4"} err="failed to get container status \"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4\": rpc error: code = NotFound desc = could not find container \"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4\": container with ID starting with 704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4 not found: ID does not exist"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.487835 4766 scope.go:117] "RemoveContainer" containerID="7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.488265 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e"} err="failed to get container status \"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e\": rpc error: code = NotFound desc = could not find container \"7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e\": container with ID starting with 7bd6f27dda8e4119fe954f4e46fdff80bcbc56fd3c49f39878fed94bbd0c795e not found: ID does not exist"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.488290 4766 scope.go:117] "RemoveContainer" containerID="693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.488640 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd"} err="failed to get container status \"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd\": rpc error: code = NotFound desc = could not find container \"693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd\": container with ID starting with 693cd6b6b8e0fb7da88161bbb5732bee241a73d4b57bf54e686e4bdb0471d2cd not found: ID does not exist"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.488656 4766 scope.go:117] "RemoveContainer" containerID="704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.488972 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4"} err="failed to get container status \"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4\": rpc error: code = NotFound desc = could not find container \"704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4\": container with ID starting with 704acdcc4939392621fca589957d11a121e4ef101810e8cfc902a1a3f5a61ad4 not found: ID does not exist"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.489043 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332ed8b7-609a-4526-9e2a-e09b1723752e-kube-api-access-4qflq" (OuterVolumeSpecName: "kube-api-access-4qflq") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "kube-api-access-4qflq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.491801 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-scripts" (OuterVolumeSpecName: "scripts") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.495452 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.567797 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-config-data" (OuterVolumeSpecName: "config-data") pod "332ed8b7-609a-4526-9e2a-e09b1723752e" (UID: "332ed8b7-609a-4526-9e2a-e09b1723752e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.580999 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.581055 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qflq\" (UniqueName: \"kubernetes.io/projected/332ed8b7-609a-4526-9e2a-e09b1723752e-kube-api-access-4qflq\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.581073 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-config-data\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.581085 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/332ed8b7-609a-4526-9e2a-e09b1723752e-scripts\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.581104 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.581117 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/332ed8b7-609a-4526-9e2a-e09b1723752e-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.597691 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.611115 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.682467 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.682514 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.772248 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.779898 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.809007 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Dec 13 04:08:50 crc kubenswrapper[4766]: E1213 04:08:50.809419 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-httpd"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.809468 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-httpd"
Dec 13 04:08:50 crc kubenswrapper[4766]: E1213 04:08:50.809479 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-log"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.809486 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-log"
Dec 13 04:08:50 crc kubenswrapper[4766]: E1213 04:08:50.809508 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-api"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.809515 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-api"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.809730 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-api"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.809761 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-httpd"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.809778 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" containerName="glance-log"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.811102 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.818737 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.825292 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886595 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-run\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886657 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-sys\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886690 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfqp\" (UniqueName: \"kubernetes.io/projected/ac9d9fc3-8839-4a44-9b07-a981b6b61162-kube-api-access-7lfqp\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886721 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886751 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886788 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886811 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886828 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886844 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-dev\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886877 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9d9fc3-8839-4a44-9b07-a981b6b61162-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886902 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac9d9fc3-8839-4a44-9b07-a981b6b61162-logs\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac9d9fc3-8839-4a44-9b07-a981b6b61162-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886949 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac9d9fc3-8839-4a44-9b07-a981b6b61162-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.886972 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988623 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-run\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988692 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-sys\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988727 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lfqp\" (UniqueName: \"kubernetes.io/projected/ac9d9fc3-8839-4a44-9b07-a981b6b61162-kube-api-access-7lfqp\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988756 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988781 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988825 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988850 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988892 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-dev\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988934 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9d9fc3-8839-4a44-9b07-a981b6b61162-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.988964 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac9d9fc3-8839-4a44-9b07-a981b6b61162-logs\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.989006 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac9d9fc3-8839-4a44-9b07-a981b6b61162-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.989042 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac9d9fc3-8839-4a44-9b07-a981b6b61162-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.989073 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.989168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.989217 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-run\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.989247 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-sys\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.989793 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.999098 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-dev\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.999207 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.999345 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.999357 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:50 crc kubenswrapper[4766]: I1213 04:08:50.999376 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ac9d9fc3-8839-4a44-9b07-a981b6b61162-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:51 crc kubenswrapper[4766]: I1213 04:08:50.999999 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac9d9fc3-8839-4a44-9b07-a981b6b61162-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:51 crc kubenswrapper[4766]: I1213 04:08:51.000394 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac9d9fc3-8839-4a44-9b07-a981b6b61162-logs\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:51 crc kubenswrapper[4766]: I1213 04:08:51.004331 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac9d9fc3-8839-4a44-9b07-a981b6b61162-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:51 crc kubenswrapper[4766]: I1213 04:08:51.008311 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac9d9fc3-8839-4a44-9b07-a981b6b61162-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:51 crc kubenswrapper[4766]: I1213 04:08:51.008972 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lfqp\" (UniqueName: \"kubernetes.io/projected/ac9d9fc3-8839-4a44-9b07-a981b6b61162-kube-api-access-7lfqp\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:51 crc kubenswrapper[4766]: I1213 04:08:51.026646 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:51 crc kubenswrapper[4766]: I1213 04:08:51.054388 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac9d9fc3-8839-4a44-9b07-a981b6b61162\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:51 crc kubenswrapper[4766]: I1213 04:08:51.144494 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Dec 13 04:08:51 crc kubenswrapper[4766]: I1213 04:08:51.631668 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="332ed8b7-609a-4526-9e2a-e09b1723752e" path="/var/lib/kubelet/pods/332ed8b7-609a-4526-9e2a-e09b1723752e/volumes"
Dec 13 04:08:51 crc kubenswrapper[4766]: I1213 04:08:51.635957 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Dec 13 04:08:51 crc kubenswrapper[4766]: W1213 04:08:51.653806 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac9d9fc3_8839_4a44_9b07_a981b6b61162.slice/crio-187f036a3e8d6afada24b367e445e0f3c43e4f4fd7f9b34895a66b36e7a18a29 WatchSource:0}: Error finding container 187f036a3e8d6afada24b367e445e0f3c43e4f4fd7f9b34895a66b36e7a18a29: Status 404 returned error can't find the container with id 187f036a3e8d6afada24b367e445e0f3c43e4f4fd7f9b34895a66b36e7a18a29
Dec 13 04:08:52 crc kubenswrapper[4766]: I1213 04:08:52.428057 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ac9d9fc3-8839-4a44-9b07-a981b6b61162","Type":"ContainerStarted","Data":"06ba6ea5ee9b152b6c4cce83afa8eeb06bc4433da1dc6176c36f8177bdb8985a"}
Dec 13 04:08:52 crc kubenswrapper[4766]: I1213 04:08:52.428544 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ac9d9fc3-8839-4a44-9b07-a981b6b61162","Type":"ContainerStarted","Data":"257b9c2154cedebbe078afbbd94049ae650261276f52657353fddc284574af13"}
Dec 13 04:08:52 crc kubenswrapper[4766]: I1213 04:08:52.428569 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ac9d9fc3-8839-4a44-9b07-a981b6b61162","Type":"ContainerStarted","Data":"96cab4872a5c1e259fd835dd2c4ec2ce702a064787c10de8130253438d184e79"}
Dec 13 04:08:52 crc kubenswrapper[4766]: I1213 04:08:52.428583 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"ac9d9fc3-8839-4a44-9b07-a981b6b61162","Type":"ContainerStarted","Data":"187f036a3e8d6afada24b367e445e0f3c43e4f4fd7f9b34895a66b36e7a18a29"}
Dec 13 04:08:52 crc kubenswrapper[4766]: I1213 04:08:52.459126 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.4591077869999998 podStartE2EDuration="2.459107787s" podCreationTimestamp="2025-12-13 04:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:08:52.450462935 +0000 UTC m=+1463.960395899" watchObservedRunningTime="2025-12-13 04:08:52.459107787 +0000 UTC m=+1463.969040751"
Dec 13 04:08:57 crc kubenswrapper[4766]: I1213 04:08:57.757838 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0"
Dec 13 04:08:57 crc kubenswrapper[4766]: I1213 04:08:57.758462 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0"
Dec 13 04:08:57 crc kubenswrapper[4766]: I1213 04:08:57.758478
4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:57 crc kubenswrapper[4766]: I1213 04:08:57.785006 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:57 crc kubenswrapper[4766]: I1213 04:08:57.788032 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:57 crc kubenswrapper[4766]: I1213 04:08:57.806275 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:58 crc kubenswrapper[4766]: I1213 04:08:58.489859 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:58 crc kubenswrapper[4766]: I1213 04:08:58.489932 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:58 crc kubenswrapper[4766]: I1213 04:08:58.489946 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:58 crc kubenswrapper[4766]: I1213 04:08:58.503832 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:58 crc kubenswrapper[4766]: I1213 04:08:58.504541 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:08:58 crc kubenswrapper[4766]: I1213 04:08:58.508955 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.146044 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.146887 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.146901 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.174064 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.174693 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.190117 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.520700 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.520767 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.520778 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.535795 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.537515 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:01 crc kubenswrapper[4766]: I1213 04:09:01.538067 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Dec 13 04:09:09 crc kubenswrapper[4766]: I1213 04:09:09.733356 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:09:09 crc kubenswrapper[4766]: I1213 04:09:09.734243 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:09:09 crc kubenswrapper[4766]: I1213 04:09:09.734324 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 04:09:09 crc kubenswrapper[4766]: I1213 04:09:09.735801 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b1133c0e727136c79aaa5f9da8214a41f9cc9ae7bd7f759a1497d0dcecad697"} pod="openshift-machine-config-operator/machine-config-daemon-94w9l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 04:09:09 crc kubenswrapper[4766]: I1213 04:09:09.735920 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" containerID="cri-o://1b1133c0e727136c79aaa5f9da8214a41f9cc9ae7bd7f759a1497d0dcecad697" gracePeriod=600 Dec 13 04:09:10 crc kubenswrapper[4766]: I1213 04:09:10.620831 4766 generic.go:334] "Generic (PLEG): container finished" podID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerID="1b1133c0e727136c79aaa5f9da8214a41f9cc9ae7bd7f759a1497d0dcecad697" exitCode=0 Dec 13 04:09:10 crc kubenswrapper[4766]: I1213 04:09:10.621283 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerDied","Data":"1b1133c0e727136c79aaa5f9da8214a41f9cc9ae7bd7f759a1497d0dcecad697"} Dec 13 04:09:10 crc kubenswrapper[4766]: I1213 04:09:10.621311 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2"} Dec 13 04:09:10 crc kubenswrapper[4766]: I1213 04:09:10.621328 4766 scope.go:117] "RemoveContainer" containerID="dc36ef313dc384ef79916b9e90a1008e5893b470ee70f62a783fefe019c14e1a" Dec 13 04:11:16 crc 
kubenswrapper[4766]: I1213 04:11:16.626835 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tfv9t"] Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.629120 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.645145 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfv9t"] Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.645632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-catalog-content\") pod \"community-operators-tfv9t\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.645698 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhckc\" (UniqueName: \"kubernetes.io/projected/b3b40457-c850-49d6-98c0-cfefa8b1debd-kube-api-access-vhckc\") pod \"community-operators-tfv9t\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.645739 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-utilities\") pod \"community-operators-tfv9t\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.746797 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-catalog-content\") pod \"community-operators-tfv9t\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.746849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhckc\" (UniqueName: \"kubernetes.io/projected/b3b40457-c850-49d6-98c0-cfefa8b1debd-kube-api-access-vhckc\") pod \"community-operators-tfv9t\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.746884 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-utilities\") pod \"community-operators-tfv9t\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.747404 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-utilities\") pod \"community-operators-tfv9t\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.747404 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-catalog-content\") pod \"community-operators-tfv9t\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.768031 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhckc\" (UniqueName: \"kubernetes.io/projected/b3b40457-c850-49d6-98c0-cfefa8b1debd-kube-api-access-vhckc\") pod \"community-operators-tfv9t\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:16 crc kubenswrapper[4766]: I1213 04:11:16.956724 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:17 crc kubenswrapper[4766]: I1213 04:11:17.685763 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfv9t"] Dec 13 04:11:17 crc kubenswrapper[4766]: I1213 04:11:17.819075 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv9t" event={"ID":"b3b40457-c850-49d6-98c0-cfefa8b1debd","Type":"ContainerStarted","Data":"390f642812c5cb6cec5784566d11f2b88a90a4ae26f01dca9df8c7fef7db734e"} Dec 13 04:11:18 crc kubenswrapper[4766]: I1213 04:11:18.850019 4766 generic.go:334] "Generic (PLEG): container finished" podID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerID="1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc" exitCode=0 Dec 13 04:11:18 crc kubenswrapper[4766]: I1213 04:11:18.851686 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv9t" event={"ID":"b3b40457-c850-49d6-98c0-cfefa8b1debd","Type":"ContainerDied","Data":"1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc"} Dec 13 04:11:18 crc kubenswrapper[4766]: I1213 04:11:18.855036 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 04:11:20 crc kubenswrapper[4766]: I1213 04:11:20.868997 4766 generic.go:334] "Generic (PLEG): container finished" podID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerID="a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d" exitCode=0 Dec 13 04:11:20 crc kubenswrapper[4766]: I1213 04:11:20.869107 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv9t" event={"ID":"b3b40457-c850-49d6-98c0-cfefa8b1debd","Type":"ContainerDied","Data":"a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d"} Dec 13 04:11:21 crc kubenswrapper[4766]: I1213 04:11:21.903196 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv9t" event={"ID":"b3b40457-c850-49d6-98c0-cfefa8b1debd","Type":"ContainerStarted","Data":"e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd"} Dec 13 04:11:21 crc kubenswrapper[4766]: I1213 04:11:21.927045 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tfv9t" podStartSLOduration=3.501484881 podStartE2EDuration="5.927024713s" podCreationTimestamp="2025-12-13 04:11:16 +0000 UTC" firstStartedPulling="2025-12-13 04:11:18.854687126 +0000 UTC m=+1610.364620090" lastFinishedPulling="2025-12-13 04:11:21.280226958 +0000 UTC m=+1612.790159922" observedRunningTime="2025-12-13 04:11:21.918264049 +0000 UTC m=+1613.428197013" watchObservedRunningTime="2025-12-13 
04:11:21.927024713 +0000 UTC m=+1613.436957677" Dec 13 04:11:26 crc kubenswrapper[4766]: I1213 04:11:26.957101 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:26 crc kubenswrapper[4766]: I1213 04:11:26.957747 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:27 crc kubenswrapper[4766]: I1213 04:11:27.021790 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:28 crc kubenswrapper[4766]: I1213 04:11:28.011223 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:28 crc kubenswrapper[4766]: I1213 04:11:28.066483 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfv9t"] Dec 13 04:11:29 crc kubenswrapper[4766]: I1213 04:11:29.979629 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tfv9t" podUID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerName="registry-server" containerID="cri-o://e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd" gracePeriod=2 Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.437864 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.560518 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-catalog-content\") pod \"b3b40457-c850-49d6-98c0-cfefa8b1debd\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.560657 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-utilities\") pod \"b3b40457-c850-49d6-98c0-cfefa8b1debd\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.560739 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhckc\" (UniqueName: \"kubernetes.io/projected/b3b40457-c850-49d6-98c0-cfefa8b1debd-kube-api-access-vhckc\") pod \"b3b40457-c850-49d6-98c0-cfefa8b1debd\" (UID: \"b3b40457-c850-49d6-98c0-cfefa8b1debd\") " Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.561543 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-utilities" (OuterVolumeSpecName: "utilities") pod "b3b40457-c850-49d6-98c0-cfefa8b1debd" (UID: "b3b40457-c850-49d6-98c0-cfefa8b1debd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.569462 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b40457-c850-49d6-98c0-cfefa8b1debd-kube-api-access-vhckc" (OuterVolumeSpecName: "kube-api-access-vhckc") pod "b3b40457-c850-49d6-98c0-cfefa8b1debd" (UID: "b3b40457-c850-49d6-98c0-cfefa8b1debd"). InnerVolumeSpecName "kube-api-access-vhckc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.610674 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3b40457-c850-49d6-98c0-cfefa8b1debd" (UID: "b3b40457-c850-49d6-98c0-cfefa8b1debd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.663887 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.663981 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b40457-c850-49d6-98c0-cfefa8b1debd-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.664055 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhckc\" (UniqueName: \"kubernetes.io/projected/b3b40457-c850-49d6-98c0-cfefa8b1debd-kube-api-access-vhckc\") on node \"crc\" DevicePath \"\"" Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.989510 4766 generic.go:334] "Generic (PLEG): container finished" podID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerID="e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd" exitCode=0 Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.989587 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv9t" Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.989610 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv9t" event={"ID":"b3b40457-c850-49d6-98c0-cfefa8b1debd","Type":"ContainerDied","Data":"e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd"} Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.989951 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv9t" event={"ID":"b3b40457-c850-49d6-98c0-cfefa8b1debd","Type":"ContainerDied","Data":"390f642812c5cb6cec5784566d11f2b88a90a4ae26f01dca9df8c7fef7db734e"} Dec 13 04:11:30 crc kubenswrapper[4766]: I1213 04:11:30.989993 4766 scope.go:117] "RemoveContainer" containerID="e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd" Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.012017 4766 scope.go:117] "RemoveContainer" containerID="a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d" Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.037408 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfv9t"] Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.056366 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tfv9t"] Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.064593 4766 scope.go:117] "RemoveContainer" containerID="1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc" Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.088349 4766 scope.go:117] "RemoveContainer" containerID="e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd" Dec 13 04:11:31 crc kubenswrapper[4766]: E1213 04:11:31.098657 4766 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd\": container with ID starting with e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd not found: ID does not exist" containerID="e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd" Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.098751 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd"} err="failed to get container status \"e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd\": rpc error: code = NotFound desc = could not find container \"e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd\": container with ID starting with e647c06a146e0811c4b3fbcd1972b8aee4a61f2125795622905550b7776f2ebd not found: ID does not exist" Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.098799 4766 scope.go:117] "RemoveContainer" containerID="a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d" Dec 13 04:11:31 crc kubenswrapper[4766]: E1213 04:11:31.102610 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d\": container with ID starting with a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d not found: ID does not exist" containerID="a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d" Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.102666 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d"} err="failed to get container status \"a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d\": rpc error: code = NotFound desc = could not find container \"a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d\": container with ID starting with a609dfde9ea99a550088fc5186d5ca4f1318b22611efa05ec36ddd1d4446830d not found: ID does not exist" Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.102698 4766 scope.go:117] "RemoveContainer" containerID="1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc" Dec 13 04:11:31 crc kubenswrapper[4766]: E1213 04:11:31.105652 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc\": container with ID starting with 1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc not found: ID does not exist" containerID="1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc" Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.105699 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc"} err="failed to get container status \"1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc\": rpc error: code = NotFound desc = could not find container \"1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc\": container with ID starting with 1e80f7fd9e809df46d13932d2d12a05bfb03afa193d385250361678516fc8acc not found: ID does not exist" Dec 13 04:11:31 crc kubenswrapper[4766]: I1213 04:11:31.626306 4766 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="b3b40457-c850-49d6-98c0-cfefa8b1debd" path="/var/lib/kubelet/pods/b3b40457-c850-49d6-98c0-cfefa8b1debd/volumes" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.726147 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6vp7l"] Dec 13 04:11:35 crc kubenswrapper[4766]: E1213 04:11:35.727952 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerName="extract-content" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.727993 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerName="extract-content" Dec 13 04:11:35 crc kubenswrapper[4766]: E1213 04:11:35.728029 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerName="extract-utilities" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.728039 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerName="extract-utilities" Dec 13 04:11:35 crc kubenswrapper[4766]: E1213 04:11:35.728057 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerName="registry-server" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.728064 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerName="registry-server" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.728224 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b40457-c850-49d6-98c0-cfefa8b1debd" containerName="registry-server" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.729510 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.743112 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vp7l"] Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.855849 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-utilities\") pod \"redhat-marketplace-6vp7l\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.855933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgfvv\" (UniqueName: \"kubernetes.io/projected/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-kube-api-access-pgfvv\") pod \"redhat-marketplace-6vp7l\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.855998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-catalog-content\") pod \"redhat-marketplace-6vp7l\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.957515 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgfvv\" (UniqueName: \"kubernetes.io/projected/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-kube-api-access-pgfvv\") pod \"redhat-marketplace-6vp7l\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.957621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-catalog-content\") pod \"redhat-marketplace-6vp7l\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.957698 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-utilities\") pod \"redhat-marketplace-6vp7l\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.958239 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-catalog-content\") pod \"redhat-marketplace-6vp7l\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.958247 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-utilities\") pod \"redhat-marketplace-6vp7l\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:35 crc kubenswrapper[4766]: I1213 04:11:35.979625 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pgfvv\" (UniqueName: \"kubernetes.io/projected/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-kube-api-access-pgfvv\") pod \"redhat-marketplace-6vp7l\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:36 crc kubenswrapper[4766]: I1213 04:11:36.061110 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:36 crc kubenswrapper[4766]: I1213 04:11:36.557419 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vp7l"] Dec 13 04:11:37 crc kubenswrapper[4766]: I1213 04:11:37.041336 4766 generic.go:334] "Generic (PLEG): container finished" podID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerID="83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608" exitCode=0 Dec 13 04:11:37 crc kubenswrapper[4766]: I1213 04:11:37.041409 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vp7l" event={"ID":"8bff5155-48a9-4ff1-af39-b85a2d9f62ff","Type":"ContainerDied","Data":"83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608"} Dec 13 04:11:37 crc kubenswrapper[4766]: I1213 04:11:37.041495 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vp7l" event={"ID":"8bff5155-48a9-4ff1-af39-b85a2d9f62ff","Type":"ContainerStarted","Data":"0f85f3784ef5070ca7479b13f59f2639398bf72994f8649304e6acc58f8589ea"} Dec 13 04:11:38 crc kubenswrapper[4766]: I1213 04:11:38.051599 4766 generic.go:334] "Generic (PLEG): container finished" podID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerID="c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854" exitCode=0 Dec 13 04:11:38 crc kubenswrapper[4766]: I1213 04:11:38.051911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vp7l" event={"ID":"8bff5155-48a9-4ff1-af39-b85a2d9f62ff","Type":"ContainerDied","Data":"c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854"} Dec 13 04:11:39 crc kubenswrapper[4766]: I1213 04:11:39.062580 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vp7l" event={"ID":"8bff5155-48a9-4ff1-af39-b85a2d9f62ff","Type":"ContainerStarted","Data":"8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9"} Dec 13 04:11:39 crc kubenswrapper[4766]: I1213 04:11:39.087643 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6vp7l" podStartSLOduration=2.650492362 podStartE2EDuration="4.087625382s" podCreationTimestamp="2025-12-13 04:11:35 +0000 UTC" firstStartedPulling="2025-12-13 04:11:37.045275465 +0000 UTC m=+1628.555208429" lastFinishedPulling="2025-12-13 04:11:38.482408485 +0000 UTC m=+1629.992341449" observedRunningTime="2025-12-13 04:11:39.085883172 +0000 UTC m=+1630.595816146" watchObservedRunningTime="2025-12-13 04:11:39.087625382 +0000 UTC m=+1630.597558346" Dec 13 04:11:39 crc kubenswrapper[4766]: I1213 04:11:39.733519 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:11:39 crc kubenswrapper[4766]: I1213 04:11:39.733901 4766 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:11:46 crc kubenswrapper[4766]: I1213 04:11:46.061802 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:46 crc kubenswrapper[4766]: I1213 04:11:46.062511 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:46 crc kubenswrapper[4766]: I1213 04:11:46.117543 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:46 crc kubenswrapper[4766]: I1213 04:11:46.180211 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:46 crc kubenswrapper[4766]: I1213 04:11:46.359359 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vp7l"] Dec 13 04:11:48 crc kubenswrapper[4766]: I1213 04:11:48.126799 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6vp7l" podUID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerName="registry-server" containerID="cri-o://8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9" gracePeriod=2 Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.048865 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.139015 4766 generic.go:334] "Generic (PLEG): container finished" podID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerID="8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9" exitCode=0 Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.139059 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vp7l" event={"ID":"8bff5155-48a9-4ff1-af39-b85a2d9f62ff","Type":"ContainerDied","Data":"8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9"} Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.139096 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vp7l" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.139127 4766 scope.go:117] "RemoveContainer" containerID="8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.139098 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vp7l" event={"ID":"8bff5155-48a9-4ff1-af39-b85a2d9f62ff","Type":"ContainerDied","Data":"0f85f3784ef5070ca7479b13f59f2639398bf72994f8649304e6acc58f8589ea"} Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.162215 4766 scope.go:117] "RemoveContainer" containerID="c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.182450 4766 scope.go:117] "RemoveContainer" containerID="83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.207721 4766 scope.go:117] "RemoveContainer" containerID="8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9" Dec 13 04:11:49 crc kubenswrapper[4766]: E1213 04:11:49.208519 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9\": container with ID starting with 8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9 not found: ID does not exist" containerID="8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.208577 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9"} err="failed to get container status \"8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9\": rpc error: code = NotFound desc = could not find container \"8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9\": container with ID starting with 8f0b9db1330e877c45d3a16e14e2611987c0407b883888843a3668b15adde0a9 not found: ID does not exist" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.208615 4766 scope.go:117] "RemoveContainer" containerID="c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854" Dec 13 04:11:49 crc kubenswrapper[4766]: E1213 04:11:49.209135 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854\": container with ID starting with c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854 not found: ID does not exist" containerID="c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.209170 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854"} err="failed to get container status \"c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854\": rpc error: code = NotFound desc = could not find container \"c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854\": container with ID starting with c29d5720b2e47f6ca4e60d218df30f63585b4641fcf51016e8cecc44d17f6854 not found: ID does not exist" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.209195 4766 scope.go:117] "RemoveContainer" 
containerID="83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608" Dec 13 04:11:49 crc kubenswrapper[4766]: E1213 04:11:49.209692 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608\": container with ID starting with 83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608 not found: ID does not exist" containerID="83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.209739 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608"} err="failed to get container status \"83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608\": rpc error: code = NotFound desc = could not find container \"83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608\": container with ID starting with 83124e814520f1daebdc914ef60ad18e8bdca0a7d66c5b880b552bc19485d608 not found: ID does not exist" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.234503 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgfvv\" (UniqueName: \"kubernetes.io/projected/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-kube-api-access-pgfvv\") pod \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.234577 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-catalog-content\") pod \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.234615 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-utilities\") pod \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\" (UID: \"8bff5155-48a9-4ff1-af39-b85a2d9f62ff\") " Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.235572 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-utilities" (OuterVolumeSpecName: "utilities") pod "8bff5155-48a9-4ff1-af39-b85a2d9f62ff" (UID: "8bff5155-48a9-4ff1-af39-b85a2d9f62ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.239940 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-kube-api-access-pgfvv" (OuterVolumeSpecName: "kube-api-access-pgfvv") pod "8bff5155-48a9-4ff1-af39-b85a2d9f62ff" (UID: "8bff5155-48a9-4ff1-af39-b85a2d9f62ff"). InnerVolumeSpecName "kube-api-access-pgfvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.257866 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bff5155-48a9-4ff1-af39-b85a2d9f62ff" (UID: "8bff5155-48a9-4ff1-af39-b85a2d9f62ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.336858 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgfvv\" (UniqueName: \"kubernetes.io/projected/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-kube-api-access-pgfvv\") on node \"crc\" DevicePath \"\"" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.336896 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.336908 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bff5155-48a9-4ff1-af39-b85a2d9f62ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.473276 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vp7l"] Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.478865 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vp7l"] Dec 13 04:11:49 crc kubenswrapper[4766]: I1213 04:11:49.638161 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" path="/var/lib/kubelet/pods/8bff5155-48a9-4ff1-af39-b85a2d9f62ff/volumes" Dec 13 04:12:09 crc kubenswrapper[4766]: I1213 04:12:09.732199 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:12:09 crc kubenswrapper[4766]: I1213 04:12:09.732899 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:12:34 crc kubenswrapper[4766]: I1213 04:12:34.862596 4766 scope.go:117] "RemoveContainer" containerID="03dfb66dc33a23e7f51e0e3294f3d4ed1e48650ea4e5df802866e1e3c79092f4" Dec 13 04:12:34 crc kubenswrapper[4766]: I1213 04:12:34.892376 4766 scope.go:117] "RemoveContainer" containerID="5c16391395fcd155be5af5097f62126590fedb412fbb77572304d984b0fe5ec2" Dec 13 04:12:39 crc kubenswrapper[4766]: I1213 04:12:39.732633 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:12:39 crc kubenswrapper[4766]: I1213 04:12:39.733196 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:12:39 crc kubenswrapper[4766]: I1213 04:12:39.733248 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 04:12:39 crc 
kubenswrapper[4766]: I1213 04:12:39.733989 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2"} pod="openshift-machine-config-operator/machine-config-daemon-94w9l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 04:12:39 crc kubenswrapper[4766]: I1213 04:12:39.734038 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" containerID="cri-o://73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" gracePeriod=600 Dec 13 04:12:39 crc kubenswrapper[4766]: E1213 04:12:39.856197 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:12:40 crc kubenswrapper[4766]: I1213 04:12:40.590268 4766 generic.go:334] "Generic (PLEG): container finished" podID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" exitCode=0 Dec 13 04:12:40 crc kubenswrapper[4766]: I1213 04:12:40.590327 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerDied","Data":"73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2"} Dec 13 04:12:40 crc kubenswrapper[4766]: I1213 04:12:40.590363 4766 scope.go:117] "RemoveContainer" containerID="1b1133c0e727136c79aaa5f9da8214a41f9cc9ae7bd7f759a1497d0dcecad697" Dec 13 04:12:40 crc kubenswrapper[4766]: I1213 04:12:40.591119 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:12:40 crc kubenswrapper[4766]: E1213 04:12:40.591336 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:12:53 crc kubenswrapper[4766]: I1213 04:12:53.617252 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:12:53 crc kubenswrapper[4766]: E1213 04:12:53.618734 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:13:05 crc kubenswrapper[4766]: I1213 04:13:05.617728 4766 scope.go:117] "RemoveContainer" 
containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:13:05 crc kubenswrapper[4766]: E1213 04:13:05.618735 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:13:19 crc kubenswrapper[4766]: I1213 04:13:19.622817 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:13:19 crc kubenswrapper[4766]: E1213 04:13:19.623840 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:13:31 crc kubenswrapper[4766]: I1213 04:13:31.616825 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:13:31 crc kubenswrapper[4766]: E1213 04:13:31.617521 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:13:35 crc kubenswrapper[4766]: I1213 04:13:35.004952 4766 scope.go:117] "RemoveContainer" containerID="76d0b99a05fcbeeaad61deae24483bf624e828053d6571f93f2901ef95d74180" Dec 13 04:13:46 crc kubenswrapper[4766]: I1213 04:13:46.616009 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:13:46 crc kubenswrapper[4766]: E1213 04:13:46.616915 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:13:58 crc kubenswrapper[4766]: I1213 04:13:58.616899 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:13:58 crc kubenswrapper[4766]: E1213 04:13:58.617548 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:14:12 crc kubenswrapper[4766]: I1213 04:14:12.616110 4766 scope.go:117] "RemoveContainer" 
containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:14:12 crc kubenswrapper[4766]: E1213 04:14:12.616834 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:14:19 crc kubenswrapper[4766]: I1213 04:14:19.050229 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-create-k7gmq"] Dec 13 04:14:19 crc kubenswrapper[4766]: I1213 04:14:19.059964 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-create-k7gmq"] Dec 13 04:14:19 crc kubenswrapper[4766]: I1213 04:14:19.626268 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4108031b-93a3-489e-96a7-796f5c427d68" path="/var/lib/kubelet/pods/4108031b-93a3-489e-96a7-796f5c427d68/volumes" Dec 13 04:14:25 crc kubenswrapper[4766]: I1213 04:14:25.616579 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:14:25 crc kubenswrapper[4766]: E1213 04:14:25.618533 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:14:28 crc kubenswrapper[4766]: I1213 04:14:28.042278 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-ff52-account-create-mx847"] Dec 13 04:14:28 crc kubenswrapper[4766]: I1213 04:14:28.049194 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-ff52-account-create-mx847"] Dec 13 04:14:29 crc kubenswrapper[4766]: I1213 04:14:29.630557 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d412be4-09f0-40a1-bda1-fafd744e6d9a" path="/var/lib/kubelet/pods/1d412be4-09f0-40a1-bda1-fafd744e6d9a/volumes" Dec 13 04:14:35 crc kubenswrapper[4766]: I1213 04:14:35.107941 4766 scope.go:117] "RemoveContainer" containerID="43f2be0e96e6ab87ca031d32e4077fbc05d070a18e2743051452bce2ee4087be" Dec 13 04:14:35 crc kubenswrapper[4766]: I1213 04:14:35.143314 4766 scope.go:117] "RemoveContainer" containerID="f97744ac9d5e49ce9f571f3a7dd9757d6671f5488a867dd793cb9042c41a3547" Dec 13 04:14:35 crc kubenswrapper[4766]: I1213 04:14:35.166001 4766 scope.go:117] "RemoveContainer" containerID="fd312189f795b21cd7df6f59132fa01a00627d6d78ec1d4e8606a353428c65f9" Dec 13 04:14:36 crc kubenswrapper[4766]: I1213 04:14:36.616458 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:14:36 crc kubenswrapper[4766]: E1213 04:14:36.617022 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:14:48 crc kubenswrapper[4766]: I1213 04:14:48.051105 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-7t8dp"] Dec 13 04:14:48 crc kubenswrapper[4766]: I1213 04:14:48.061750 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-7t8dp"] Dec 13 04:14:49 crc kubenswrapper[4766]: I1213 04:14:49.622422 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:14:49 crc kubenswrapper[4766]: E1213 04:14:49.623179 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:14:49 crc kubenswrapper[4766]: I1213 04:14:49.624377 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9" path="/var/lib/kubelet/pods/0ab1da30-ad2f-40ab-aff9-3bf5ba8ef0b9/volumes" Dec 13 04:14:58 crc kubenswrapper[4766]: I1213 04:14:58.054455 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-jghjb"] Dec 13 04:14:58 crc kubenswrapper[4766]: I1213 04:14:58.064535 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-jghjb"] Dec 13 04:14:59 crc kubenswrapper[4766]: I1213 04:14:59.628341 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c1be2a-b38f-480c-83a8-2225cd2ed80a" path="/var/lib/kubelet/pods/02c1be2a-b38f-480c-83a8-2225cd2ed80a/volumes" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.153237 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8"] Dec 13 04:15:00 crc kubenswrapper[4766]: E1213 04:15:00.154871 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerName="registry-server" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.154962 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerName="registry-server" Dec 13 04:15:00 crc kubenswrapper[4766]: E1213 04:15:00.155117 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerName="extract-content" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.155154 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerName="extract-content" Dec 13 04:15:00 crc kubenswrapper[4766]: E1213 04:15:00.155182 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerName="extract-utilities" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.155203 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerName="extract-utilities" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.156016 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bff5155-48a9-4ff1-af39-b85a2d9f62ff" containerName="registry-server" Dec 13 
04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.158765 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.163082 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.163833 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.194674 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8"] Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.299973 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5305ed3a-38ec-451d-96af-744b4bb7e75d-config-volume\") pod \"collect-profiles-29426655-dd4d8\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.300189 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8wrm\" (UniqueName: \"kubernetes.io/projected/5305ed3a-38ec-451d-96af-744b4bb7e75d-kube-api-access-c8wrm\") pod \"collect-profiles-29426655-dd4d8\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.300286 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5305ed3a-38ec-451d-96af-744b4bb7e75d-secret-volume\") pod \"collect-profiles-29426655-dd4d8\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.401090 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5305ed3a-38ec-451d-96af-744b4bb7e75d-config-volume\") pod \"collect-profiles-29426655-dd4d8\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.401169 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8wrm\" (UniqueName: \"kubernetes.io/projected/5305ed3a-38ec-451d-96af-744b4bb7e75d-kube-api-access-c8wrm\") pod \"collect-profiles-29426655-dd4d8\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.401209 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5305ed3a-38ec-451d-96af-744b4bb7e75d-secret-volume\") pod \"collect-profiles-29426655-dd4d8\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.403494 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/5305ed3a-38ec-451d-96af-744b4bb7e75d-config-volume\") pod \"collect-profiles-29426655-dd4d8\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.422766 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5305ed3a-38ec-451d-96af-744b4bb7e75d-secret-volume\") pod \"collect-profiles-29426655-dd4d8\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.423911 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8wrm\" (UniqueName: \"kubernetes.io/projected/5305ed3a-38ec-451d-96af-744b4bb7e75d-kube-api-access-c8wrm\") pod \"collect-profiles-29426655-dd4d8\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.493292 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:00 crc kubenswrapper[4766]: I1213 04:15:00.906296 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8"] Dec 13 04:15:01 crc kubenswrapper[4766]: I1213 04:15:01.822525 4766 generic.go:334] "Generic (PLEG): container finished" podID="5305ed3a-38ec-451d-96af-744b4bb7e75d" containerID="1e422e356ff6113e2ce6bddbcd236a6c31f4bc0178c9bfdc2f6c496d4623933f" exitCode=0 Dec 13 04:15:01 crc kubenswrapper[4766]: I1213 04:15:01.822591 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" event={"ID":"5305ed3a-38ec-451d-96af-744b4bb7e75d","Type":"ContainerDied","Data":"1e422e356ff6113e2ce6bddbcd236a6c31f4bc0178c9bfdc2f6c496d4623933f"} Dec 13 04:15:01 crc kubenswrapper[4766]: I1213 04:15:01.823031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" event={"ID":"5305ed3a-38ec-451d-96af-744b4bb7e75d","Type":"ContainerStarted","Data":"979b931998c0f04dffe2906c268f9e88edb9605568741ae1e5b07dc8dc5270d5"} Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.132173 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.285169 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5305ed3a-38ec-451d-96af-744b4bb7e75d-config-volume\") pod \"5305ed3a-38ec-451d-96af-744b4bb7e75d\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.285448 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8wrm\" (UniqueName: \"kubernetes.io/projected/5305ed3a-38ec-451d-96af-744b4bb7e75d-kube-api-access-c8wrm\") pod \"5305ed3a-38ec-451d-96af-744b4bb7e75d\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.285485 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5305ed3a-38ec-451d-96af-744b4bb7e75d-secret-volume\") pod \"5305ed3a-38ec-451d-96af-744b4bb7e75d\" (UID: \"5305ed3a-38ec-451d-96af-744b4bb7e75d\") " Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.286374 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5305ed3a-38ec-451d-96af-744b4bb7e75d-config-volume" (OuterVolumeSpecName: "config-volume") pod "5305ed3a-38ec-451d-96af-744b4bb7e75d" (UID: "5305ed3a-38ec-451d-96af-744b4bb7e75d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.297869 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5305ed3a-38ec-451d-96af-744b4bb7e75d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5305ed3a-38ec-451d-96af-744b4bb7e75d" (UID: "5305ed3a-38ec-451d-96af-744b4bb7e75d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.297980 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5305ed3a-38ec-451d-96af-744b4bb7e75d-kube-api-access-c8wrm" (OuterVolumeSpecName: "kube-api-access-c8wrm") pod "5305ed3a-38ec-451d-96af-744b4bb7e75d" (UID: "5305ed3a-38ec-451d-96af-744b4bb7e75d"). InnerVolumeSpecName "kube-api-access-c8wrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.387550 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5305ed3a-38ec-451d-96af-744b4bb7e75d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.387607 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8wrm\" (UniqueName: \"kubernetes.io/projected/5305ed3a-38ec-451d-96af-744b4bb7e75d-kube-api-access-c8wrm\") on node \"crc\" DevicePath \"\"" Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.387649 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5305ed3a-38ec-451d-96af-744b4bb7e75d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.617814 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:15:03 crc kubenswrapper[4766]: E1213 04:15:03.619323 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.868041 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" event={"ID":"5305ed3a-38ec-451d-96af-744b4bb7e75d","Type":"ContainerDied","Data":"979b931998c0f04dffe2906c268f9e88edb9605568741ae1e5b07dc8dc5270d5"} Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.868275 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="979b931998c0f04dffe2906c268f9e88edb9605568741ae1e5b07dc8dc5270d5" Dec 13 04:15:03 crc kubenswrapper[4766]: I1213 04:15:03.868444 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426655-dd4d8" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.499147 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"] Dec 13 04:15:09 crc kubenswrapper[4766]: E1213 04:15:09.499951 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5305ed3a-38ec-451d-96af-744b4bb7e75d" containerName="collect-profiles" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.499967 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5305ed3a-38ec-451d-96af-744b4bb7e75d" containerName="collect-profiles" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.500099 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5305ed3a-38ec-451d-96af-744b4bb7e75d" containerName="collect-profiles" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.500627 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.504725 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.505292 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.505299 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.505596 4766 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-qhkws" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.534464 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.574347 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0d0d40c-f117-400f-aaec-1e339a7779c1-openstack-config\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.574447 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d0d0d40c-f117-400f-aaec-1e339a7779c1-openstack-config-secret\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.574530 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkm5j\" (UniqueName: \"kubernetes.io/projected/d0d0d40c-f117-400f-aaec-1e339a7779c1-kube-api-access-rkm5j\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.574557 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/d0d0d40c-f117-400f-aaec-1e339a7779c1-openstack-scripts\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.675897 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0d0d40c-f117-400f-aaec-1e339a7779c1-openstack-config\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.675951 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d0d0d40c-f117-400f-aaec-1e339a7779c1-openstack-config-secret\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.676013 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkm5j\" (UniqueName: 
\"kubernetes.io/projected/d0d0d40c-f117-400f-aaec-1e339a7779c1-kube-api-access-rkm5j\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.676035 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/d0d0d40c-f117-400f-aaec-1e339a7779c1-openstack-scripts\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.677302 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/d0d0d40c-f117-400f-aaec-1e339a7779c1-openstack-scripts\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.678051 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0d0d40c-f117-400f-aaec-1e339a7779c1-openstack-config\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.685779 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d0d0d40c-f117-400f-aaec-1e339a7779c1-openstack-config-secret\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.701280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkm5j\" (UniqueName: \"kubernetes.io/projected/d0d0d40c-f117-400f-aaec-1e339a7779c1-kube-api-access-rkm5j\") pod \"openstackclient\" (UID: \"d0d0d40c-f117-400f-aaec-1e339a7779c1\") " pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:09 crc kubenswrapper[4766]: I1213 04:15:09.828029 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Dec 13 04:15:10 crc kubenswrapper[4766]: I1213 04:15:10.136291 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Dec 13 04:15:10 crc kubenswrapper[4766]: I1213 04:15:10.934322 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"d0d0d40c-f117-400f-aaec-1e339a7779c1","Type":"ContainerStarted","Data":"b77ce3563defdaf52545535ea760270ccbd8946a811eaab806e9bd8451a9330e"} Dec 13 04:15:10 crc kubenswrapper[4766]: I1213 04:15:10.934987 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"d0d0d40c-f117-400f-aaec-1e339a7779c1","Type":"ContainerStarted","Data":"7d48ef06662b836a697e41545fb1b78be313709e2442b83b81bd116b0314cf8f"} Dec 13 04:15:10 crc kubenswrapper[4766]: I1213 04:15:10.956473 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=1.9564435009999999 podStartE2EDuration="1.956443501s" podCreationTimestamp="2025-12-13 04:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 04:15:10.954117566 +0000 UTC m=+1842.464050570" watchObservedRunningTime="2025-12-13 04:15:10.956443501 +0000 UTC m=+1842.466376465" Dec 13 04:15:17 crc kubenswrapper[4766]: I1213 04:15:17.615994 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:15:17 crc kubenswrapper[4766]: E1213 04:15:17.616694 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:15:29 crc kubenswrapper[4766]: I1213 04:15:29.621353 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:15:29 crc kubenswrapper[4766]: E1213 04:15:29.621971 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:15:35 crc kubenswrapper[4766]: I1213 04:15:35.255992 4766 scope.go:117] "RemoveContainer" containerID="628c79df45d943b9d96ca41ca0297013ba36bff3fe20e5c04e541b32fee7d58d" Dec 13 04:15:35 crc kubenswrapper[4766]: I1213 04:15:35.315209 4766 scope.go:117] "RemoveContainer" containerID="d09165a3bc2c2ad1341a5eaf05a4a0c95e3bb4a64bd05401aca14b8d58758556" Dec 13 04:15:42 crc kubenswrapper[4766]: I1213 04:15:42.616862 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:15:42 crc kubenswrapper[4766]: E1213 04:15:42.617892 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:15:54 crc kubenswrapper[4766]: I1213 04:15:54.617240 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:15:54 crc kubenswrapper[4766]: E1213 04:15:54.619207 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:16:07 crc kubenswrapper[4766]: I1213 04:16:07.617267 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:16:07 crc kubenswrapper[4766]: E1213 04:16:07.618291 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:16:21 crc kubenswrapper[4766]: I1213 04:16:21.616040 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:16:21 crc kubenswrapper[4766]: E1213 04:16:21.616869 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:16:34 crc kubenswrapper[4766]: I1213 04:16:34.616554 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:16:34 crc kubenswrapper[4766]: E1213 04:16:34.617558 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.199940 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-shstf/must-gather-5lz58"] Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.204077 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shstf/must-gather-5lz58" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.206244 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-shstf"/"openshift-service-ca.crt" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.206315 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-shstf"/"default-dockercfg-nx2jr" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.207013 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-shstf"/"kube-root-ca.crt" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.257027 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-shstf/must-gather-5lz58"] Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.301060 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-must-gather-output\") pod \"must-gather-5lz58\" (UID: \"d8df5171-edb0-4f50-81a5-4f0c8958cd7b\") " pod="openshift-must-gather-shstf/must-gather-5lz58" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.301653 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb9vj\" (UniqueName: \"kubernetes.io/projected/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-kube-api-access-rb9vj\") pod \"must-gather-5lz58\" (UID: \"d8df5171-edb0-4f50-81a5-4f0c8958cd7b\") " pod="openshift-must-gather-shstf/must-gather-5lz58" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.403384 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-must-gather-output\") pod \"must-gather-5lz58\" (UID: \"d8df5171-edb0-4f50-81a5-4f0c8958cd7b\") " pod="openshift-must-gather-shstf/must-gather-5lz58" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.403545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb9vj\" (UniqueName: \"kubernetes.io/projected/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-kube-api-access-rb9vj\") pod \"must-gather-5lz58\" (UID: \"d8df5171-edb0-4f50-81a5-4f0c8958cd7b\") " pod="openshift-must-gather-shstf/must-gather-5lz58" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.403891 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-must-gather-output\") pod \"must-gather-5lz58\" (UID: \"d8df5171-edb0-4f50-81a5-4f0c8958cd7b\") " pod="openshift-must-gather-shstf/must-gather-5lz58" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.428420 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb9vj\" (UniqueName: \"kubernetes.io/projected/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-kube-api-access-rb9vj\") pod \"must-gather-5lz58\" (UID: \"d8df5171-edb0-4f50-81a5-4f0c8958cd7b\") " pod="openshift-must-gather-shstf/must-gather-5lz58" Dec 13 04:16:43 crc kubenswrapper[4766]: I1213 04:16:43.523622 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shstf/must-gather-5lz58" Dec 13 04:16:44 crc kubenswrapper[4766]: I1213 04:16:44.001169 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 04:16:44 crc kubenswrapper[4766]: I1213 04:16:44.005888 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-shstf/must-gather-5lz58"] Dec 13 04:16:44 crc kubenswrapper[4766]: I1213 04:16:44.969087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shstf/must-gather-5lz58" event={"ID":"d8df5171-edb0-4f50-81a5-4f0c8958cd7b","Type":"ContainerStarted","Data":"69a343ad60374fd8958400625c244d2935aa37c05b2bb22f308174d0dc8f6281"} Dec 13 04:16:46 crc kubenswrapper[4766]: I1213 04:16:46.618032 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:16:46 crc kubenswrapper[4766]: E1213 04:16:46.618325 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:16:50 crc kubenswrapper[4766]: I1213 04:16:50.016053 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shstf/must-gather-5lz58" event={"ID":"d8df5171-edb0-4f50-81a5-4f0c8958cd7b","Type":"ContainerStarted","Data":"de51d9807ed1e8af9fecf227810e255e110eb0229e58c3e2584c2bbdf2d2ea61"} Dec 13 04:16:50 crc kubenswrapper[4766]: I1213 04:16:50.016721 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shstf/must-gather-5lz58" event={"ID":"d8df5171-edb0-4f50-81a5-4f0c8958cd7b","Type":"ContainerStarted","Data":"7231bcfecea81466ba110670ac091467430bc534f4f12ad4f80c7c93b93bf9c8"} Dec 13 04:16:50 crc kubenswrapper[4766]: I1213 04:16:50.037167 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-shstf/must-gather-5lz58" podStartSLOduration=1.940096788 podStartE2EDuration="7.037146746s" podCreationTimestamp="2025-12-13 04:16:43 +0000 UTC" firstStartedPulling="2025-12-13 04:16:44.00091144 +0000 UTC m=+1935.510844404" lastFinishedPulling="2025-12-13 04:16:49.097961378 +0000 UTC m=+1940.607894362" observedRunningTime="2025-12-13 04:16:50.031797758 +0000 UTC m=+1941.541730722" watchObservedRunningTime="2025-12-13 04:16:50.037146746 +0000 UTC m=+1941.547079710" Dec 13 04:17:00 crc kubenswrapper[4766]: I1213 04:17:00.616206 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:17:00 crc kubenswrapper[4766]: E1213 04:17:00.617051 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:17:13 crc kubenswrapper[4766]: I1213 04:17:13.616314 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:17:13 crc 
kubenswrapper[4766]: E1213 04:17:13.617318 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:17:25 crc kubenswrapper[4766]: I1213 04:17:25.616350 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:17:25 crc kubenswrapper[4766]: E1213 04:17:25.617077 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" Dec 13 04:17:26 crc kubenswrapper[4766]: I1213 04:17:26.918309 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2_f510c8f4-ae74-4fce-af53-a743bdb3e2d3/util/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.180042 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2_f510c8f4-ae74-4fce-af53-a743bdb3e2d3/pull/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.187618 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2_f510c8f4-ae74-4fce-af53-a743bdb3e2d3/util/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.188299 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2_f510c8f4-ae74-4fce-af53-a743bdb3e2d3/pull/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.357440 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2_f510c8f4-ae74-4fce-af53-a743bdb3e2d3/util/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.361235 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2_f510c8f4-ae74-4fce-af53-a743bdb3e2d3/pull/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.403194 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_154061c201d9a2bcd4b28fdc78f598992fc44dcea707fc8b60e8ae7144wq6g2_f510c8f4-ae74-4fce-af53-a743bdb3e2d3/extract/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.530727 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv_4abca1b2-000f-4ba5-8f32-012dafeb6043/util/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.727751 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv_4abca1b2-000f-4ba5-8f32-012dafeb6043/pull/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.736391 
4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv_4abca1b2-000f-4ba5-8f32-012dafeb6043/util/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.752760 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv_4abca1b2-000f-4ba5-8f32-012dafeb6043/pull/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.961220 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv_4abca1b2-000f-4ba5-8f32-012dafeb6043/extract/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.973636 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv_4abca1b2-000f-4ba5-8f32-012dafeb6043/pull/0.log" Dec 13 04:17:27 crc kubenswrapper[4766]: I1213 04:17:27.986074 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_353a5f0b08f804b0bcab9c666e5e842bac2b504bd58d684e6a455cd53adhbdv_4abca1b2-000f-4ba5-8f32-012dafeb6043/util/0.log" Dec 13 04:17:28 crc kubenswrapper[4766]: I1213 04:17:28.154729 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82_bae6116c-1f11-4b23-826c-90264615b3ca/util/0.log" Dec 13 04:17:28 crc kubenswrapper[4766]: I1213 04:17:28.334321 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82_bae6116c-1f11-4b23-826c-90264615b3ca/pull/0.log" Dec 13 04:17:28 crc kubenswrapper[4766]: I1213 04:17:28.337149 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82_bae6116c-1f11-4b23-826c-90264615b3ca/util/0.log" Dec 13 04:17:28 crc kubenswrapper[4766]: I1213 04:17:28.346188 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82_bae6116c-1f11-4b23-826c-90264615b3ca/pull/0.log" Dec 13 04:17:28 crc kubenswrapper[4766]: I1213 04:17:28.589776 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82_bae6116c-1f11-4b23-826c-90264615b3ca/util/0.log" Dec 13 04:17:28 crc kubenswrapper[4766]: I1213 04:17:28.609359 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82_bae6116c-1f11-4b23-826c-90264615b3ca/extract/0.log" Dec 13 04:17:28 crc kubenswrapper[4766]: I1213 04:17:28.625729 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_798531b9d9a078af26f5f153dd8093f0980ac32bb052a41050c010ef748kv82_bae6116c-1f11-4b23-826c-90264615b3ca/pull/0.log" Dec 13 04:17:28 crc kubenswrapper[4766]: I1213 04:17:28.770616 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9_c7f3ff78-d2df-409f-96f8-6c92d67ba29f/util/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.107028 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9_c7f3ff78-d2df-409f-96f8-6c92d67ba29f/util/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.118743 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9_c7f3ff78-d2df-409f-96f8-6c92d67ba29f/pull/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.134203 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9_c7f3ff78-d2df-409f-96f8-6c92d67ba29f/pull/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.281617 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9_c7f3ff78-d2df-409f-96f8-6c92d67ba29f/util/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.301177 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9_c7f3ff78-d2df-409f-96f8-6c92d67ba29f/pull/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.350539 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rtlw9_c7f3ff78-d2df-409f-96f8-6c92d67ba29f/extract/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.455041 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq_0b23e5f0-5ae8-4925-a71c-8c78b43b7814/util/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.652523 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq_0b23e5f0-5ae8-4925-a71c-8c78b43b7814/util/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.678174 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq_0b23e5f0-5ae8-4925-a71c-8c78b43b7814/pull/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.681613 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq_0b23e5f0-5ae8-4925-a71c-8c78b43b7814/pull/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.888296 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq_0b23e5f0-5ae8-4925-a71c-8c78b43b7814/pull/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.907326 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq_0b23e5f0-5ae8-4925-a71c-8c78b43b7814/extract/0.log" Dec 13 04:17:29 crc kubenswrapper[4766]: I1213 04:17:29.915064 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a1916e853cc7b15de58e1b135fc3d4209d9752d32839650491e625e07fnbjjq_0b23e5f0-5ae8-4925-a71c-8c78b43b7814/util/0.log" Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.079522 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc_dcd5d2af-af90-4a46-9fe2-a70f544e7d66/util/0.log" Dec 13 04:17:30 
crc kubenswrapper[4766]: I1213 04:17:30.255513 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc_dcd5d2af-af90-4a46-9fe2-a70f544e7d66/util/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.259080 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc_dcd5d2af-af90-4a46-9fe2-a70f544e7d66/pull/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.269043 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc_dcd5d2af-af90-4a46-9fe2-a70f544e7d66/pull/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.494014 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc_dcd5d2af-af90-4a46-9fe2-a70f544e7d66/extract/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.497501 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc_dcd5d2af-af90-4a46-9fe2-a70f544e7d66/pull/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.502334 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f_5d9c93c6-fb77-4340-a91e-8c70448ddbf8/util/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.509392 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03115aeb6cae4480b1000a58090a522fd8798c774d18bf66e63c4d3d1t6zcc_dcd5d2af-af90-4a46-9fe2-a70f544e7d66/util/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.683527 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f_5d9c93c6-fb77-4340-a91e-8c70448ddbf8/util/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.707233 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f_5d9c93c6-fb77-4340-a91e-8c70448ddbf8/pull/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.712271 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f_5d9c93c6-fb77-4340-a91e-8c70448ddbf8/pull/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.890248 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f_5d9c93c6-fb77-4340-a91e-8c70448ddbf8/util/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.896348 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f_5d9c93c6-fb77-4340-a91e-8c70448ddbf8/pull/0.log"
Dec 13 04:17:30 crc kubenswrapper[4766]: I1213 04:17:30.936267 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-647f8b4768-2nnpl_35a443bc-4590-440c-976b-4fce0f7a4467/kube-rbac-proxy/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.051021 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ecda4f68b43f97a73ba5ef60921e2829255d35d4b507f90299f8458d63cmq8f_5d9c93c6-fb77-4340-a91e-8c70448ddbf8/extract/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.155057 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-647f8b4768-2nnpl_35a443bc-4590-440c-976b-4fce0f7a4467/manager/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.175588 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-index-zsl6f_2b382d40-1957-43a4-82bf-8af8648a5857/registry-server/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.288407 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-f7949f797-6ctkv_7ed09af0-6061-4c7d-9c33-82e4c82a24ac/kube-rbac-proxy/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.395092 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-f7949f797-6ctkv_7ed09af0-6061-4c7d-9c33-82e4c82a24ac/manager/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.422684 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-index-6lbrw_610a5f6d-2863-42d4-9a95-024335d79560/registry-server/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.487791 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-8cbcb47f-6sh6l_7de0f18b-5607-4711-ace6-6a841491a182/kube-rbac-proxy/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.629795 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-8cbcb47f-6sh6l_7de0f18b-5607-4711-ace6-6a841491a182/manager/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.668487 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-index-q9rsm_4ea27557-d042-4498-b233-1c60ca372f49/registry-server/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.744763 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-59f97d4bd8-pqmhg_6b50297f-eb50-4348-8a02-05ed4e3c2d61/kube-rbac-proxy/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.858848 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-59f97d4bd8-pqmhg_6b50297f-eb50-4348-8a02-05ed4e3c2d61/manager/0.log"
Dec 13 04:17:31 crc kubenswrapper[4766]: I1213 04:17:31.892991 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-index-hn8rh_002ad17d-8277-4767-824e-93b28859e781/registry-server/0.log"
Dec 13 04:17:32 crc kubenswrapper[4766]: I1213 04:17:32.043295 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c86c5fb9b-hp997_da4aa2bc-b65f-4188-8b6c-f89f57f8abec/kube-rbac-proxy/0.log"
Dec 13 04:17:32 crc kubenswrapper[4766]: I1213 04:17:32.117869 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c86c5fb9b-hp997_da4aa2bc-b65f-4188-8b6c-f89f57f8abec/manager/0.log"
Dec 13 04:17:32 crc kubenswrapper[4766]: I1213 04:17:32.176085 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-index-hvxs7_ab67c713-8fff-41ec-bd15-da3ea9d58068/registry-server/0.log"
Dec 13 04:17:32 crc kubenswrapper[4766]: I1213 04:17:32.267918 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-779fc9694b-cchkx_ec9e8dc9-892c-4716-9552-a94ae6996b2b/operator/0.log"
Dec 13 04:17:32 crc kubenswrapper[4766]: I1213 04:17:32.402934 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-index-b8rfq_c7ffb6ae-a1fa-4f11-b908-9e42901c8c25/registry-server/0.log"
Dec 13 04:17:32 crc kubenswrapper[4766]: I1213 04:17:32.404397 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5c69cff4d6-rrx82_da849bf2-7205-4eb8-b526-37115b6632de/kube-rbac-proxy/0.log"
Dec 13 04:17:32 crc kubenswrapper[4766]: I1213 04:17:32.472013 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5c69cff4d6-rrx82_da849bf2-7205-4eb8-b526-37115b6632de/manager/0.log"
Dec 13 04:17:32 crc kubenswrapper[4766]: I1213 04:17:32.628103 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-index-x6k42_783f433d-69d6-4c52-8884-96bc07460269/registry-server/0.log"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.471367 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dmxkt"]
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.473605 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.503649 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dmxkt"]
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.560691 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79wxb\" (UniqueName: \"kubernetes.io/projected/238f4260-a6bb-449e-b84e-305af303fd34-kube-api-access-79wxb\") pod \"redhat-operators-dmxkt\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") " pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.560751 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-catalog-content\") pod \"redhat-operators-dmxkt\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") " pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.560811 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-utilities\") pod \"redhat-operators-dmxkt\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") " pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.662128 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-utilities\") pod \"redhat-operators-dmxkt\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") " pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.662245 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79wxb\" (UniqueName: \"kubernetes.io/projected/238f4260-a6bb-449e-b84e-305af303fd34-kube-api-access-79wxb\") pod \"redhat-operators-dmxkt\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") " pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.662266 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-catalog-content\") pod \"redhat-operators-dmxkt\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") " pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.662995 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-catalog-content\") pod \"redhat-operators-dmxkt\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") " pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.663241 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-utilities\") pod \"redhat-operators-dmxkt\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") " pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.682941 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79wxb\" (UniqueName: \"kubernetes.io/projected/238f4260-a6bb-449e-b84e-305af303fd34-kube-api-access-79wxb\") pod \"redhat-operators-dmxkt\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") " pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:37 crc kubenswrapper[4766]: I1213 04:17:37.797144 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:38 crc kubenswrapper[4766]: I1213 04:17:38.100021 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dmxkt"]
Dec 13 04:17:38 crc kubenswrapper[4766]: I1213 04:17:38.390736 4766 generic.go:334] "Generic (PLEG): container finished" podID="238f4260-a6bb-449e-b84e-305af303fd34" containerID="8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96" exitCode=0
Dec 13 04:17:38 crc kubenswrapper[4766]: I1213 04:17:38.390916 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxkt" event={"ID":"238f4260-a6bb-449e-b84e-305af303fd34","Type":"ContainerDied","Data":"8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96"}
Dec 13 04:17:38 crc kubenswrapper[4766]: I1213 04:17:38.391020 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxkt" event={"ID":"238f4260-a6bb-449e-b84e-305af303fd34","Type":"ContainerStarted","Data":"a56eeff447d9edead4b4e8450aa79a15edc2dae5fe4b09a6f79168c4008604c7"}
Dec 13 04:17:38 crc kubenswrapper[4766]: I1213 04:17:38.616230 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2"
Dec 13 04:17:38 crc kubenswrapper[4766]: E1213 04:17:38.617491 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94w9l_openshift-machine-config-operator(71e6a48b-4f5d-4299-9c7b-98dbe11e670e)\"" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e"
Dec 13 04:17:39 crc kubenswrapper[4766]: I1213 04:17:39.398943 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxkt" event={"ID":"238f4260-a6bb-449e-b84e-305af303fd34","Type":"ContainerStarted","Data":"bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445"}
Dec 13 04:17:42 crc kubenswrapper[4766]: I1213 04:17:42.421995 4766 generic.go:334] "Generic (PLEG): container finished" podID="238f4260-a6bb-449e-b84e-305af303fd34" containerID="bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445" exitCode=0
Dec 13 04:17:42 crc kubenswrapper[4766]: I1213 04:17:42.422064 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxkt" event={"ID":"238f4260-a6bb-449e-b84e-305af303fd34","Type":"ContainerDied","Data":"bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445"}
Dec 13 04:17:44 crc kubenswrapper[4766]: I1213 04:17:44.439843 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxkt" event={"ID":"238f4260-a6bb-449e-b84e-305af303fd34","Type":"ContainerStarted","Data":"92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6"}
Dec 13 04:17:44 crc kubenswrapper[4766]: I1213 04:17:44.469561 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dmxkt" podStartSLOduration=2.275618155 podStartE2EDuration="7.469536046s" podCreationTimestamp="2025-12-13 04:17:37 +0000 UTC" firstStartedPulling="2025-12-13 04:17:38.392590498 +0000 UTC m=+1989.902523462" lastFinishedPulling="2025-12-13 04:17:43.586508389 +0000 UTC m=+1995.096441353" observedRunningTime="2025-12-13 04:17:44.460532076 +0000 UTC m=+1995.970465040" watchObservedRunningTime="2025-12-13 04:17:44.469536046 +0000 UTC m=+1995.979469030"
Dec 13 04:17:47 crc kubenswrapper[4766]: I1213 04:17:47.797561 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:47 crc kubenswrapper[4766]: I1213 04:17:47.797941 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:48 crc kubenswrapper[4766]: I1213 04:17:48.775111 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-l9g79_c3ad1798-0983-4d00-ae6d-aef7143647f3/control-plane-machine-set-operator/0.log"
Dec 13 04:17:48 crc kubenswrapper[4766]: I1213 04:17:48.839471 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dmxkt" podUID="238f4260-a6bb-449e-b84e-305af303fd34" containerName="registry-server" probeResult="failure" output=<
Dec 13 04:17:48 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s
Dec 13 04:17:48 crc kubenswrapper[4766]: >
Dec 13 04:17:48 crc kubenswrapper[4766]: I1213 04:17:48.938261 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wrg86_7b288e53-1206-4832-8d01-94dd9d33f9dd/kube-rbac-proxy/0.log"
Dec 13 04:17:48 crc kubenswrapper[4766]: I1213 04:17:48.989080 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wrg86_7b288e53-1206-4832-8d01-94dd9d33f9dd/machine-api-operator/0.log"
Dec 13 04:17:53 crc kubenswrapper[4766]: I1213 04:17:53.617156 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2"
Dec 13 04:17:54 crc kubenswrapper[4766]: I1213 04:17:54.538510 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"03150f8860bfe2b1d231473bb9db0604670a18da80382af1a0bad59963242429"}
Dec 13 04:17:57 crc kubenswrapper[4766]: I1213 04:17:57.843920 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:57 crc kubenswrapper[4766]: I1213 04:17:57.893211 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:17:58 crc kubenswrapper[4766]: I1213 04:17:58.082328 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dmxkt"]
Dec 13 04:17:59 crc kubenswrapper[4766]: I1213 04:17:59.575197 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dmxkt" podUID="238f4260-a6bb-449e-b84e-305af303fd34" containerName="registry-server" containerID="cri-o://92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6" gracePeriod=2
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.023286 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.190279 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-utilities\") pod \"238f4260-a6bb-449e-b84e-305af303fd34\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") "
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.190446 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-catalog-content\") pod \"238f4260-a6bb-449e-b84e-305af303fd34\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") "
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.190503 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79wxb\" (UniqueName: \"kubernetes.io/projected/238f4260-a6bb-449e-b84e-305af303fd34-kube-api-access-79wxb\") pod \"238f4260-a6bb-449e-b84e-305af303fd34\" (UID: \"238f4260-a6bb-449e-b84e-305af303fd34\") "
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.191117 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-utilities" (OuterVolumeSpecName: "utilities") pod "238f4260-a6bb-449e-b84e-305af303fd34" (UID: "238f4260-a6bb-449e-b84e-305af303fd34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.197457 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/238f4260-a6bb-449e-b84e-305af303fd34-kube-api-access-79wxb" (OuterVolumeSpecName: "kube-api-access-79wxb") pod "238f4260-a6bb-449e-b84e-305af303fd34" (UID: "238f4260-a6bb-449e-b84e-305af303fd34"). InnerVolumeSpecName "kube-api-access-79wxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.292062 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-utilities\") on node \"crc\" DevicePath \"\""
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.292115 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79wxb\" (UniqueName: \"kubernetes.io/projected/238f4260-a6bb-449e-b84e-305af303fd34-kube-api-access-79wxb\") on node \"crc\" DevicePath \"\""
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.310460 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "238f4260-a6bb-449e-b84e-305af303fd34" (UID: "238f4260-a6bb-449e-b84e-305af303fd34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.393641 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238f4260-a6bb-449e-b84e-305af303fd34-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.583682 4766 generic.go:334] "Generic (PLEG): container finished" podID="238f4260-a6bb-449e-b84e-305af303fd34" containerID="92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6" exitCode=0
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.583730 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxkt" event={"ID":"238f4260-a6bb-449e-b84e-305af303fd34","Type":"ContainerDied","Data":"92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6"}
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.583765 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxkt" event={"ID":"238f4260-a6bb-449e-b84e-305af303fd34","Type":"ContainerDied","Data":"a56eeff447d9edead4b4e8450aa79a15edc2dae5fe4b09a6f79168c4008604c7"}
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.583783 4766 scope.go:117] "RemoveContainer" containerID="92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6"
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.584907 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dmxkt"
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.618846 4766 scope.go:117] "RemoveContainer" containerID="bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445"
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.619827 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dmxkt"]
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.626876 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dmxkt"]
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.639842 4766 scope.go:117] "RemoveContainer" containerID="8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96"
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.677570 4766 scope.go:117] "RemoveContainer" containerID="92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6"
Dec 13 04:18:00 crc kubenswrapper[4766]: E1213 04:18:00.678129 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6\": container with ID starting with 92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6 not found: ID does not exist" containerID="92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6"
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.678166 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6"} err="failed to get container status \"92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6\": rpc error: code = NotFound desc = could not find container \"92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6\": container with ID starting with 92842f0aadf6b662766a66c41f9a67643aac8e4ee52c111d8d0034da79fc7cb6 not found: ID does not exist"
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.678191 4766 scope.go:117] "RemoveContainer" containerID="bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445"
Dec 13 04:18:00 crc kubenswrapper[4766]: E1213 04:18:00.678559 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445\": container with ID starting with bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445 not found: ID does not exist" containerID="bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445"
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.678577 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445"} err="failed to get container status \"bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445\": rpc error: code = NotFound desc = could not find container \"bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445\": container with ID starting with bd3f7ec4242b0181130bd339ff6a468f1804ca42f284d6adccd25b767db7c445 not found: ID does not exist"
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.678591 4766 scope.go:117] "RemoveContainer" containerID="8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96"
Dec 13 04:18:00 crc kubenswrapper[4766]: E1213 04:18:00.678843 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96\": container with ID starting with 8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96 not found: ID does not exist" containerID="8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96"
Dec 13 04:18:00 crc kubenswrapper[4766]: I1213 04:18:00.678880 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96"} err="failed to get container status \"8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96\": rpc error: code = NotFound desc = could not find container \"8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96\": container with ID starting with 8c0a2db7eab4cd35f8e35d7632f2fef458ce5a74abe963e52d5a68c26d5a5f96 not found: ID does not exist"
Dec 13 04:18:01 crc kubenswrapper[4766]: I1213 04:18:01.624405 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="238f4260-a6bb-449e-b84e-305af303fd34" path="/var/lib/kubelet/pods/238f4260-a6bb-449e-b84e-305af303fd34/volumes"
Dec 13 04:18:05 crc kubenswrapper[4766]: I1213 04:18:05.841064 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-4pshx_49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81/kube-rbac-proxy/0.log"
Dec 13 04:18:05 crc kubenswrapper[4766]: I1213 04:18:05.883832 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-4pshx_49cd51a9-7aa8-4927-b8ef-0dc5f08bbd81/controller/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.039114 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-frr-files/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.212293 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-frr-files/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.212327 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-metrics/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.234753 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-reloader/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.247760 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-reloader/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.461943 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-metrics/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.462065 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-frr-files/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.489464 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-reloader/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.494588 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-metrics/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.663921 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-frr-files/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.680255 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-reloader/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.717089 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/cp-metrics/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.720187 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/controller/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.849130 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/frr-metrics/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.931772 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/kube-rbac-proxy-frr/0.log"
Dec 13 04:18:06 crc kubenswrapper[4766]: I1213 04:18:06.943205 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/kube-rbac-proxy/0.log"
Dec 13 04:18:07 crc kubenswrapper[4766]: I1213 04:18:07.116711 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/reloader/0.log"
Dec 13 04:18:07 crc kubenswrapper[4766]: I1213 04:18:07.195810 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-98sqt_3873c53e-f61e-4d7e-bfe8-5f43ad0c49c5/frr-k8s-webhook-server/0.log"
Dec 13 04:18:07 crc kubenswrapper[4766]: I1213 04:18:07.305133 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f2nj6_32d33161-c294-4ece-8783-25472b4ac4b7/frr/0.log"
Dec 13 04:18:07 crc kubenswrapper[4766]: I1213 04:18:07.402149 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6468b8b4bf-7j6nm_92fe426f-df80-49bf-9259-7e04836a793f/manager/0.log"
Dec 13 04:18:07 crc kubenswrapper[4766]: I1213 04:18:07.496989 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7fb5f44fc8-wqts7_b43cb171-915c-4cb3-bc9b-1525fe72213e/webhook-server/0.log"
Dec 13 04:18:07 crc kubenswrapper[4766]: I1213 04:18:07.727873 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vft9m_fbd9aada-00ed-49a4-94b7-d96c1014fcfe/kube-rbac-proxy/0.log"
Dec 13 04:18:07 crc kubenswrapper[4766]: I1213 04:18:07.896513 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vft9m_fbd9aada-00ed-49a4-94b7-d96c1014fcfe/speaker/0.log"
Dec 13 04:18:21 crc kubenswrapper[4766]: I1213 04:18:21.541213 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-464e-account-create-tswck_153113c7-f43b-44c2-91a1-a77f8b1e4def/mariadb-account-create/0.log"
Dec 13 04:18:21 crc kubenswrapper[4766]: I1213 04:18:21.724964 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-db-create-rv7lj_fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6/mariadb-database-create/0.log"
Dec 13 04:18:21 crc kubenswrapper[4766]: I1213 04:18:21.761350 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-db-sync-vzt9z_e17456b9-737f-498f-bfe9-97257ae7ae6d/glance-db-sync/0.log"
Dec 13 04:18:21 crc kubenswrapper[4766]: I1213 04:18:21.891472 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-external-api-0_920897da-f7c3-455b-b8a9-1491e8543ed5/glance-api/0.log"
Dec 13 04:18:21 crc kubenswrapper[4766]: I1213 04:18:21.894237 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-external-api-0_920897da-f7c3-455b-b8a9-1491e8543ed5/glance-httpd/0.log"
Dec 13 04:18:21 crc kubenswrapper[4766]: I1213 04:18:21.940710 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-external-api-0_920897da-f7c3-455b-b8a9-1491e8543ed5/glance-log/0.log"
Dec 13 04:18:22 crc kubenswrapper[4766]: I1213 04:18:22.040972 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-internal-api-0_ac9d9fc3-8839-4a44-9b07-a981b6b61162/glance-api/0.log"
Dec 13 04:18:22 crc kubenswrapper[4766]: I1213 04:18:22.108579 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-internal-api-0_ac9d9fc3-8839-4a44-9b07-a981b6b61162/glance-httpd/0.log"
Dec 13 04:18:22 crc kubenswrapper[4766]: I1213 04:18:22.153207 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_glance-default-internal-api-0_ac9d9fc3-8839-4a44-9b07-a981b6b61162/glance-log/0.log"
Dec 13 04:18:22 crc kubenswrapper[4766]: I1213 04:18:22.493573 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_keystone-5cf4ff88f8-pvnzl_1b8808f5-1647-40bb-a071-0592681524fb/keystone-api/0.log"
Dec 13 04:18:22 crc kubenswrapper[4766]: I1213 04:18:22.694183 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_8d1d4ee6-5f01-4c7b-b326-6f4dc686022e/mysql-bootstrap/0.log"
Dec 13 04:18:22 crc kubenswrapper[4766]: I1213 04:18:22.755988 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_8d1d4ee6-5f01-4c7b-b326-6f4dc686022e/galera/0.log"
Dec 13 04:18:22 crc kubenswrapper[4766]: I1213 04:18:22.759010 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_8d1d4ee6-5f01-4c7b-b326-6f4dc686022e/mysql-bootstrap/0.log"
Dec 13 04:18:22 crc kubenswrapper[4766]: I1213 04:18:22.995568 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_cfe14027-5466-43b9-90a1-04ea55370210/mysql-bootstrap/0.log"
Dec 13 04:18:23 crc kubenswrapper[4766]: I1213 04:18:23.257549 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_cfe14027-5466-43b9-90a1-04ea55370210/galera/0.log"
Dec 13 04:18:23 crc kubenswrapper[4766]: I1213 04:18:23.299680 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_memcached-0_490a8c60-a83e-44ca-93ff-e5b802d5d20a/memcached/0.log"
Dec 13 04:18:23 crc kubenswrapper[4766]: I1213 04:18:23.313563 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_cfe14027-5466-43b9-90a1-04ea55370210/mysql-bootstrap/0.log"
Dec 13 04:18:23 crc kubenswrapper[4766]: I1213 04:18:23.442936 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_19b5d6fe-47e9-4816-907d-af0d46b556d2/mysql-bootstrap/0.log"
Dec 13 04:18:23 crc kubenswrapper[4766]: I1213 04:18:23.657609 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_19b5d6fe-47e9-4816-907d-af0d46b556d2/galera/0.log"
Dec 13 04:18:23 crc kubenswrapper[4766]: I1213 04:18:23.663517 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_19b5d6fe-47e9-4816-907d-af0d46b556d2/mysql-bootstrap/0.log"
Dec 13 04:18:23 crc kubenswrapper[4766]: I1213 04:18:23.671180 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstackclient_d0d0d40c-f117-400f-aaec-1e339a7779c1/openstackclient/0.log"
Dec 13 04:18:23 crc kubenswrapper[4766]: I1213 04:18:23.811159 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_9ae00965-a778-4106-85dd-84fba5782c83/setup-container/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.017558 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_9ae00965-a778-4106-85dd-84fba5782c83/setup-container/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.028588 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_9ae00965-a778-4106-85dd-84fba5782c83/rabbitmq/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.114180 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-proxy-8cfd9857-9gfdt_d45f9168-9e25-4f26-9c4b-22fff074e16f/proxy-httpd/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.236022 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-proxy-8cfd9857-9gfdt_d45f9168-9e25-4f26-9c4b-22fff074e16f/proxy-server/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.283597 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-ring-rebalance-8hn4p_375c2b15-d870-4c02-bb26-f7deac6a4e81/swift-ring-rebalance/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.410551 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/account-reaper/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.494833 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/account-auditor/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.505487 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/account-replicator/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.556673 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/account-server/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.650555 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/container-auditor/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.728566 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/container-replicator/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.751805 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/container-server/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.766718 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/container-updater/0.log"
Dec 13 04:18:24 crc kubenswrapper[4766]: I1213 04:18:24.858483 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/object-auditor/0.log"
Dec 13 04:18:25 crc kubenswrapper[4766]: I1213 04:18:25.059662 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/object-expirer/0.log"
Dec 13 04:18:25 crc kubenswrapper[4766]: I1213 04:18:25.076032 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/object-replicator/0.log"
Dec 13 04:18:25 crc kubenswrapper[4766]: I1213 04:18:25.115812 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/object-server/0.log"
Dec 13 04:18:25 crc kubenswrapper[4766]: I1213 04:18:25.219404 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/object-updater/0.log"
Dec 13 04:18:25 crc kubenswrapper[4766]: I1213 04:18:25.253586 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/rsync/0.log"
Dec 13 04:18:25 crc kubenswrapper[4766]: I1213 04:18:25.286915 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_9d937b76-c14b-462b-8de2-fecf78a9d3cf/swift-recon-cron/0.log"
Dec 13 04:18:27 crc kubenswrapper[4766]: I1213 04:18:27.049154 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-rv7lj"]
Dec 13 04:18:27 crc kubenswrapper[4766]: I1213 04:18:27.056293 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-rv7lj"]
Dec 13 04:18:27 crc kubenswrapper[4766]: I1213 04:18:27.625774 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6" path="/var/lib/kubelet/pods/fa4020ff-9e3c-46c8-bea4-3b8ab951e6c6/volumes"
Dec 13 04:18:35 crc kubenswrapper[4766]: I1213 04:18:35.435291 4766 scope.go:117] "RemoveContainer" containerID="cbe4177e5e7f133cb6b0f7fe441435ca3e3e95dd26f975f3e886d3e0244be004"
Dec 13 04:18:37 crc kubenswrapper[4766]: I1213 04:18:37.024687 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-464e-account-create-tswck"]
Dec 13 04:18:37 crc kubenswrapper[4766]: I1213 04:18:37.030261 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-464e-account-create-tswck"]
Dec 13 04:18:37 crc kubenswrapper[4766]: I1213 04:18:37.626297 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153113c7-f43b-44c2-91a1-a77f8b1e4def" path="/var/lib/kubelet/pods/153113c7-f43b-44c2-91a1-a77f8b1e4def/volumes"
Dec 13 04:18:38 crc kubenswrapper[4766]: I1213 04:18:38.500583 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs_dd2797cc-1780-4828-b835-7bde5a0de2c4/util/0.log"
Dec 13 04:18:38 crc kubenswrapper[4766]: I1213 04:18:38.828391 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs_dd2797cc-1780-4828-b835-7bde5a0de2c4/util/0.log"
Dec 13 04:18:38 crc kubenswrapper[4766]: I1213 04:18:38.840785 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs_dd2797cc-1780-4828-b835-7bde5a0de2c4/pull/0.log"
Dec 13 04:18:38 crc kubenswrapper[4766]: I1213 04:18:38.892378 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs_dd2797cc-1780-4828-b835-7bde5a0de2c4/pull/0.log"
Dec 13 04:18:39 crc kubenswrapper[4766]: I1213 04:18:39.039989 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs_dd2797cc-1780-4828-b835-7bde5a0de2c4/util/0.log"
Dec 13 04:18:39 crc kubenswrapper[4766]: I1213 04:18:39.046471 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs_dd2797cc-1780-4828-b835-7bde5a0de2c4/extract/0.log"
Dec 13 04:18:39 crc kubenswrapper[4766]: I1213 04:18:39.051718 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49k5xs_dd2797cc-1780-4828-b835-7bde5a0de2c4/pull/0.log"
Dec 13 04:18:39 crc kubenswrapper[4766]: I1213 04:18:39.227650 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2k56_7d6c441f-c934-4352-8997-84aa50668ac0/extract-utilities/0.log"
Dec 13 04:18:39 crc kubenswrapper[4766]: I1213 04:18:39.399627 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2k56_7d6c441f-c934-4352-8997-84aa50668ac0/extract-utilities/0.log"
Dec 13 04:18:39 crc kubenswrapper[4766]: I1213 04:18:39.426180 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2k56_7d6c441f-c934-4352-8997-84aa50668ac0/extract-content/0.log"
Dec 13 04:18:39 crc kubenswrapper[4766]: I1213 04:18:39.426597 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2k56_7d6c441f-c934-4352-8997-84aa50668ac0/extract-content/0.log"
Dec 13 04:18:39 crc kubenswrapper[4766]: I1213 04:18:39.614918 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2k56_7d6c441f-c934-4352-8997-84aa50668ac0/extract-content/0.log"
Dec 13 04:18:39 crc kubenswrapper[4766]: I1213 04:18:39.624823 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2k56_7d6c441f-c934-4352-8997-84aa50668ac0/extract-utilities/0.log"
Dec 13 04:18:39 crc kubenswrapper[4766]: I1213 04:18:39.863593 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tsghz_077b3190-346b-4de2-ae4a-b10ef4c0f635/extract-utilities/0.log"
Dec 13 04:18:40 crc kubenswrapper[4766]: I1213 04:18:40.058188 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tsghz_077b3190-346b-4de2-ae4a-b10ef4c0f635/extract-content/0.log"
Dec 13 04:18:40 crc kubenswrapper[4766]: I1213 04:18:40.065049 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tsghz_077b3190-346b-4de2-ae4a-b10ef4c0f635/extract-content/0.log"
Dec 13 04:18:40 crc kubenswrapper[4766]: I1213 04:18:40.065847 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tsghz_077b3190-346b-4de2-ae4a-b10ef4c0f635/extract-utilities/0.log"
Dec 13 04:18:40 crc kubenswrapper[4766]: I1213 04:18:40.277333 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tsghz_077b3190-346b-4de2-ae4a-b10ef4c0f635/extract-utilities/0.log"
Dec 13 04:18:40 crc kubenswrapper[4766]: I1213 04:18:40.285958 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tsghz_077b3190-346b-4de2-ae4a-b10ef4c0f635/extract-content/0.log"
Dec 13 04:18:40 crc kubenswrapper[4766]: I1213 04:18:40.492300 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8fdvl_b992d64d-b068-4d78-aac9-7e0ff5eda198/marketplace-operator/0.log"
Dec 13 04:18:40 crc kubenswrapper[4766]: I1213 04:18:40.666437 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v844g_38153d6a-6577-469c-aa93-6eb38dd85064/extract-utilities/0.log"
Dec 13 04:18:40 crc kubenswrapper[4766]: I1213 04:18:40.875784 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v844g_38153d6a-6577-469c-aa93-6eb38dd85064/extract-utilities/0.log"
Dec 13 04:18:40 crc kubenswrapper[4766]: I1213 04:18:40.899983 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v844g_38153d6a-6577-469c-aa93-6eb38dd85064/extract-content/0.log"
Dec 13 04:18:41 crc kubenswrapper[4766]: I1213 04:18:41.066874 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2k56_7d6c441f-c934-4352-8997-84aa50668ac0/registry-server/0.log"
Dec 13 04:18:41 crc kubenswrapper[4766]: I1213 04:18:41.104019 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v844g_38153d6a-6577-469c-aa93-6eb38dd85064/extract-content/0.log"
Dec 13 04:18:41 crc kubenswrapper[4766]: I1213 04:18:41.410708 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tsghz_077b3190-346b-4de2-ae4a-b10ef4c0f635/registry-server/0.log"
Dec 13 04:18:41 crc kubenswrapper[4766]: I1213 04:18:41.418996 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v844g_38153d6a-6577-469c-aa93-6eb38dd85064/extract-content/0.log"
Dec 13 04:18:41 crc kubenswrapper[4766]: I1213 04:18:41.419870 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v844g_38153d6a-6577-469c-aa93-6eb38dd85064/extract-utilities/0.log"
Dec 13 04:18:41 crc kubenswrapper[4766]: I1213 04:18:41.551776 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v844g_38153d6a-6577-469c-aa93-6eb38dd85064/registry-server/0.log"
Dec 13 04:18:41 crc kubenswrapper[4766]: I1213 04:18:41.633457 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fp7m_388cd2e4-3bb2-4972-be1a-cc0dcc346746/extract-utilities/0.log"
Dec 13 04:18:41 crc kubenswrapper[4766]: I1213 04:18:41.862694 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fp7m_388cd2e4-3bb2-4972-be1a-cc0dcc346746/extract-content/0.log"
Dec 13 04:18:41 crc kubenswrapper[4766]: I1213 04:18:41.870993 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fp7m_388cd2e4-3bb2-4972-be1a-cc0dcc346746/extract-utilities/0.log"
Dec 13 04:18:41 crc kubenswrapper[4766]: I1213 04:18:41.877524 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fp7m_388cd2e4-3bb2-4972-be1a-cc0dcc346746/extract-content/0.log"
Dec 13 04:18:42 crc kubenswrapper[4766]: I1213 04:18:42.015878 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fp7m_388cd2e4-3bb2-4972-be1a-cc0dcc346746/extract-utilities/0.log"
Dec 13 04:18:42 crc kubenswrapper[4766]: I1213 04:18:42.105882 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fp7m_388cd2e4-3bb2-4972-be1a-cc0dcc346746/extract-content/0.log"
Dec 13 04:18:42 crc kubenswrapper[4766]: I1213 04:18:42.482086 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fp7m_388cd2e4-3bb2-4972-be1a-cc0dcc346746/registry-server/0.log"
Dec 13 04:18:46 crc kubenswrapper[4766]: I1213 04:18:46.035161 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-vzt9z"]
Dec 13 04:18:46 crc kubenswrapper[4766]: I1213 04:18:46.041238 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-vzt9z"]
Dec 13 04:18:47 crc kubenswrapper[4766]: I1213 04:18:47.624314 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17456b9-737f-498f-bfe9-97257ae7ae6d" path="/var/lib/kubelet/pods/e17456b9-737f-498f-bfe9-97257ae7ae6d/volumes"
Dec 13 04:19:35 crc kubenswrapper[4766]: I1213 04:19:35.498929 4766 scope.go:117] "RemoveContainer" containerID="075e9291eb8d7dad8d7131f39c11abb1410af9a5fb91a0e69d03b3ce4f8ab67b"
Dec 13 04:19:35 crc kubenswrapper[4766]: I1213 04:19:35.581722 4766 scope.go:117] "RemoveContainer" containerID="e93332c99f7a79af52ced1e12631e08ce73c8884516fc57749abbefa5be4ca88"
Dec 13 04:19:55 crc kubenswrapper[4766]: I1213 04:19:55.543058 4766 generic.go:334] "Generic (PLEG): container finished" podID="d8df5171-edb0-4f50-81a5-4f0c8958cd7b" containerID="7231bcfecea81466ba110670ac091467430bc534f4f12ad4f80c7c93b93bf9c8" exitCode=0
Dec 13 04:19:55 crc kubenswrapper[4766]: I1213 04:19:55.543321 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shstf/must-gather-5lz58" event={"ID":"d8df5171-edb0-4f50-81a5-4f0c8958cd7b","Type":"ContainerDied","Data":"7231bcfecea81466ba110670ac091467430bc534f4f12ad4f80c7c93b93bf9c8"}
Dec 13 04:19:55 crc kubenswrapper[4766]: I1213 04:19:55.544162 4766 scope.go:117] "RemoveContainer" containerID="7231bcfecea81466ba110670ac091467430bc534f4f12ad4f80c7c93b93bf9c8"
Dec 13 04:19:56 crc kubenswrapper[4766]: I1213 04:19:56.442629 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-shstf_must-gather-5lz58_d8df5171-edb0-4f50-81a5-4f0c8958cd7b/gather/0.log"
Dec 13 04:19:57 crc kubenswrapper[4766]: I1213 04:19:57.911840 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pxs86"]
Dec 13 04:19:57 crc kubenswrapper[4766]: E1213 04:19:57.912545 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="238f4260-a6bb-449e-b84e-305af303fd34" containerName="extract-utilities"
Dec 13 04:19:57 crc kubenswrapper[4766]: I1213 04:19:57.912574 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="238f4260-a6bb-449e-b84e-305af303fd34" containerName="extract-utilities"
Dec 13 04:19:57 crc kubenswrapper[4766]: E1213 04:19:57.912590 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="238f4260-a6bb-449e-b84e-305af303fd34" containerName="extract-content"
Dec 13 04:19:57 crc kubenswrapper[4766]: I1213 04:19:57.912596 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="238f4260-a6bb-449e-b84e-305af303fd34" containerName="extract-content"
Dec 13 04:19:57 crc kubenswrapper[4766]: E1213 04:19:57.912610 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="238f4260-a6bb-449e-b84e-305af303fd34" containerName="registry-server"
Dec 13 04:19:57 crc kubenswrapper[4766]: I1213 04:19:57.912617 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="238f4260-a6bb-449e-b84e-305af303fd34" containerName="registry-server"
Dec 13 04:19:57 crc kubenswrapper[4766]: I1213 04:19:57.912803 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="238f4260-a6bb-449e-b84e-305af303fd34" containerName="registry-server"
Dec 13 04:19:57 crc kubenswrapper[4766]: I1213 04:19:57.913976 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:57 crc kubenswrapper[4766]: I1213 04:19:57.928291 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pxs86"]
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.089105 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-utilities\") pod \"certified-operators-pxs86\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.089270 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-catalog-content\") pod \"certified-operators-pxs86\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.089868 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv6hb\" (UniqueName: \"kubernetes.io/projected/c08c5012-2c3c-47cb-9fac-3a3681831c6c-kube-api-access-tv6hb\") pod \"certified-operators-pxs86\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.191912 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-utilities\") pod \"certified-operators-pxs86\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.192071 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-catalog-content\") pod \"certified-operators-pxs86\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.192221 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv6hb\" (UniqueName: \"kubernetes.io/projected/c08c5012-2c3c-47cb-9fac-3a3681831c6c-kube-api-access-tv6hb\") pod \"certified-operators-pxs86\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.192492 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-utilities\") pod \"certified-operators-pxs86\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.192635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-catalog-content\") pod \"certified-operators-pxs86\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.228542 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv6hb\" (UniqueName: \"kubernetes.io/projected/c08c5012-2c3c-47cb-9fac-3a3681831c6c-kube-api-access-tv6hb\") pod \"certified-operators-pxs86\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.236630 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.533247 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pxs86"]
Dec 13 04:19:58 crc kubenswrapper[4766]: I1213 04:19:58.567769 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxs86" event={"ID":"c08c5012-2c3c-47cb-9fac-3a3681831c6c","Type":"ContainerStarted","Data":"d0d1d01f7253dca478bf2eee494383f10cf8450a6f9cc519ea10d9cc4ffb56f4"}
Dec 13 04:19:59 crc kubenswrapper[4766]: I1213 04:19:59.577543 4766 generic.go:334] "Generic (PLEG): container finished" podID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" containerID="3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311" exitCode=0
Dec 13 04:19:59 crc kubenswrapper[4766]: I1213 04:19:59.577609 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxs86" event={"ID":"c08c5012-2c3c-47cb-9fac-3a3681831c6c","Type":"ContainerDied","Data":"3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311"}
Dec 13 04:20:00 crc kubenswrapper[4766]: I1213 04:20:00.588967 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxs86" event={"ID":"c08c5012-2c3c-47cb-9fac-3a3681831c6c","Type":"ContainerStarted","Data":"88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed"}
Dec 13 04:20:01 crc kubenswrapper[4766]: I1213 04:20:01.597867 4766 generic.go:334] "Generic (PLEG): container finished" podID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" containerID="88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed" exitCode=0
Dec 13 04:20:01 crc kubenswrapper[4766]: I1213 04:20:01.597917 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxs86" event={"ID":"c08c5012-2c3c-47cb-9fac-3a3681831c6c","Type":"ContainerDied","Data":"88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed"}
Dec 13 04:20:02 crc kubenswrapper[4766]: I1213 04:20:02.608249 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxs86" event={"ID":"c08c5012-2c3c-47cb-9fac-3a3681831c6c","Type":"ContainerStarted","Data":"5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f"}
Dec 13 04:20:02 crc kubenswrapper[4766]: I1213 04:20:02.628093 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pxs86" podStartSLOduration=3.062921723 podStartE2EDuration="5.628075838s" podCreationTimestamp="2025-12-13 04:19:57 +0000 UTC" firstStartedPulling="2025-12-13 04:19:59.579574923 +0000 UTC m=+2131.089507887" lastFinishedPulling="2025-12-13 04:20:02.144729038 +0000 UTC m=+2133.654662002" observedRunningTime="2025-12-13 04:20:02.624592679 +0000 UTC m=+2134.134525643" watchObservedRunningTime="2025-12-13 04:20:02.628075838 +0000 UTC m=+2134.138008792"
Dec 13 04:20:04 crc kubenswrapper[4766]: I1213 04:20:04.423597 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-shstf/must-gather-5lz58"]
Dec 13 04:20:04 crc kubenswrapper[4766]: I1213 04:20:04.424328 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-shstf/must-gather-5lz58" podUID="d8df5171-edb0-4f50-81a5-4f0c8958cd7b" containerName="copy" containerID="cri-o://de51d9807ed1e8af9fecf227810e255e110eb0229e58c3e2584c2bbdf2d2ea61" gracePeriod=2
Dec 13 04:20:04 crc kubenswrapper[4766]: I1213 04:20:04.432543 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-shstf/must-gather-5lz58"]
Dec 13 04:20:04 crc kubenswrapper[4766]: I1213 04:20:04.624882 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-shstf_must-gather-5lz58_d8df5171-edb0-4f50-81a5-4f0c8958cd7b/copy/0.log"
Dec 13 04:20:04 crc kubenswrapper[4766]: I1213 04:20:04.625367 4766 generic.go:334] "Generic (PLEG): container finished" podID="d8df5171-edb0-4f50-81a5-4f0c8958cd7b" containerID="de51d9807ed1e8af9fecf227810e255e110eb0229e58c3e2584c2bbdf2d2ea61" exitCode=143
Dec 13 04:20:04 crc kubenswrapper[4766]: I1213 04:20:04.805358 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-shstf_must-gather-5lz58_d8df5171-edb0-4f50-81a5-4f0c8958cd7b/copy/0.log"
Dec 13 04:20:04 crc kubenswrapper[4766]: I1213 04:20:04.806109 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-shstf/must-gather-5lz58"
Dec 13 04:20:04 crc kubenswrapper[4766]: I1213 04:20:04.910249 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb9vj\" (UniqueName: \"kubernetes.io/projected/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-kube-api-access-rb9vj\") pod \"d8df5171-edb0-4f50-81a5-4f0c8958cd7b\" (UID: \"d8df5171-edb0-4f50-81a5-4f0c8958cd7b\") "
Dec 13 04:20:04 crc kubenswrapper[4766]: I1213 04:20:04.910329 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-must-gather-output\") pod \"d8df5171-edb0-4f50-81a5-4f0c8958cd7b\" (UID: \"d8df5171-edb0-4f50-81a5-4f0c8958cd7b\") "
Dec 13 04:20:04 crc kubenswrapper[4766]: I1213 04:20:04.920706 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-kube-api-access-rb9vj" (OuterVolumeSpecName: "kube-api-access-rb9vj") pod "d8df5171-edb0-4f50-81a5-4f0c8958cd7b" (UID: "d8df5171-edb0-4f50-81a5-4f0c8958cd7b"). InnerVolumeSpecName "kube-api-access-rb9vj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:20:05 crc kubenswrapper[4766]: I1213 04:20:05.012244 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d8df5171-edb0-4f50-81a5-4f0c8958cd7b" (UID: "d8df5171-edb0-4f50-81a5-4f0c8958cd7b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 04:20:05 crc kubenswrapper[4766]: I1213 04:20:05.012948 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb9vj\" (UniqueName: \"kubernetes.io/projected/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-kube-api-access-rb9vj\") on node \"crc\" DevicePath \"\""
Dec 13 04:20:05 crc kubenswrapper[4766]: I1213 04:20:05.013042 4766 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d8df5171-edb0-4f50-81a5-4f0c8958cd7b-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 13 04:20:05 crc kubenswrapper[4766]: I1213 04:20:05.625251 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8df5171-edb0-4f50-81a5-4f0c8958cd7b" path="/var/lib/kubelet/pods/d8df5171-edb0-4f50-81a5-4f0c8958cd7b/volumes"
Dec 13 04:20:05 crc kubenswrapper[4766]: I1213 04:20:05.640005 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-shstf_must-gather-5lz58_d8df5171-edb0-4f50-81a5-4f0c8958cd7b/copy/0.log"
Dec 13 04:20:05 crc kubenswrapper[4766]: I1213 04:20:05.640778 4766 scope.go:117] "RemoveContainer" containerID="de51d9807ed1e8af9fecf227810e255e110eb0229e58c3e2584c2bbdf2d2ea61"
Dec 13 04:20:05 crc kubenswrapper[4766]: I1213 04:20:05.641127 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-shstf/must-gather-5lz58"
Dec 13 04:20:05 crc kubenswrapper[4766]: I1213 04:20:05.694631 4766 scope.go:117] "RemoveContainer" containerID="7231bcfecea81466ba110670ac091467430bc534f4f12ad4f80c7c93b93bf9c8"
Dec 13 04:20:08 crc kubenswrapper[4766]: I1213 04:20:08.237778 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:20:08 crc kubenswrapper[4766]: I1213 04:20:08.238717 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:20:08 crc kubenswrapper[4766]: I1213 04:20:08.284469 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:20:08 crc kubenswrapper[4766]: I1213 04:20:08.706657 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pxs86"
Dec 13 04:20:09 crc kubenswrapper[4766]: I1213 04:20:09.732787 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 04:20:09 crc kubenswrapper[4766]: I1213 04:20:09.732891 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 04:20:09 crc kubenswrapper[4766]: I1213 04:20:09.910235 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pxs86"]
Dec 13 04:20:10 crc kubenswrapper[4766]: I1213 04:20:10.678087 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pxs86"
podUID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" containerName="registry-server" containerID="cri-o://5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f" gracePeriod=2 Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.149531 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxs86" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.313705 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-catalog-content\") pod \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.313901 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv6hb\" (UniqueName: \"kubernetes.io/projected/c08c5012-2c3c-47cb-9fac-3a3681831c6c-kube-api-access-tv6hb\") pod \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.313953 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-utilities\") pod \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\" (UID: \"c08c5012-2c3c-47cb-9fac-3a3681831c6c\") " Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.315835 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-utilities" (OuterVolumeSpecName: "utilities") pod "c08c5012-2c3c-47cb-9fac-3a3681831c6c" (UID: "c08c5012-2c3c-47cb-9fac-3a3681831c6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.324567 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08c5012-2c3c-47cb-9fac-3a3681831c6c-kube-api-access-tv6hb" (OuterVolumeSpecName: "kube-api-access-tv6hb") pod "c08c5012-2c3c-47cb-9fac-3a3681831c6c" (UID: "c08c5012-2c3c-47cb-9fac-3a3681831c6c"). InnerVolumeSpecName "kube-api-access-tv6hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.416755 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv6hb\" (UniqueName: \"kubernetes.io/projected/c08c5012-2c3c-47cb-9fac-3a3681831c6c-kube-api-access-tv6hb\") on node \"crc\" DevicePath \"\"" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.416842 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.572349 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c08c5012-2c3c-47cb-9fac-3a3681831c6c" (UID: "c08c5012-2c3c-47cb-9fac-3a3681831c6c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.629368 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08c5012-2c3c-47cb-9fac-3a3681831c6c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.689253 4766 generic.go:334] "Generic (PLEG): container finished" podID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" containerID="5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f" exitCode=0 Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.689357 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxs86" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.689350 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxs86" event={"ID":"c08c5012-2c3c-47cb-9fac-3a3681831c6c","Type":"ContainerDied","Data":"5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f"} Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.689586 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxs86" event={"ID":"c08c5012-2c3c-47cb-9fac-3a3681831c6c","Type":"ContainerDied","Data":"d0d1d01f7253dca478bf2eee494383f10cf8450a6f9cc519ea10d9cc4ffb56f4"} Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.689621 4766 scope.go:117] "RemoveContainer" containerID="5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.717716 4766 scope.go:117] "RemoveContainer" containerID="88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.724628 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pxs86"] Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.739633 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pxs86"] Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.748199 4766 scope.go:117] "RemoveContainer" containerID="3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.780807 4766 scope.go:117] "RemoveContainer" containerID="5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f" Dec 13 04:20:11 crc kubenswrapper[4766]: E1213 04:20:11.782001 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f\": container with ID starting with 5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f not found: ID does not exist" containerID="5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.782075 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f"} err="failed to get container status \"5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f\": rpc error: code = NotFound desc = could not find container \"5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f\": container with ID starting with 5777dbb81d31fad7ed00220531cc878d26c55d69d69649cc1ec0a013cd7da10f not found: ID does not exist" Dec 13 
04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.782119 4766 scope.go:117] "RemoveContainer" containerID="88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed" Dec 13 04:20:11 crc kubenswrapper[4766]: E1213 04:20:11.782697 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed\": container with ID starting with 88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed not found: ID does not exist" containerID="88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.782730 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed"} err="failed to get container status \"88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed\": rpc error: code = NotFound desc = could not find container \"88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed\": container with ID starting with 88573baa17d5707339766b8c238f76bc98f4cc8587223fab4642ef15f1c7faed not found: ID does not exist" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.782748 4766 scope.go:117] "RemoveContainer" containerID="3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311" Dec 13 04:20:11 crc kubenswrapper[4766]: E1213 04:20:11.783382 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311\": container with ID starting with 3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311 not found: ID does not exist" containerID="3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311" Dec 13 04:20:11 crc kubenswrapper[4766]: I1213 04:20:11.783540 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311"} err="failed to get container status \"3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311\": rpc error: code = NotFound desc = could not find container \"3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311\": container with ID starting with 3d0e798a2473cfee374f8c4df901a0f2e55cb40ba9a628996a0b609693e64311 not found: ID does not exist" Dec 13 04:20:13 crc kubenswrapper[4766]: I1213 04:20:13.629966 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" path="/var/lib/kubelet/pods/c08c5012-2c3c-47cb-9fac-3a3681831c6c/volumes" Dec 13 04:20:39 crc kubenswrapper[4766]: I1213 04:20:39.732412 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:20:39 crc kubenswrapper[4766]: I1213 04:20:39.733084 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:21:09 crc kubenswrapper[4766]: I1213 04:21:09.733557 4766 patch_prober.go:28] 
interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:21:09 crc kubenswrapper[4766]: I1213 04:21:09.734292 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:21:09 crc kubenswrapper[4766]: I1213 04:21:09.734365 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" Dec 13 04:21:09 crc kubenswrapper[4766]: I1213 04:21:09.735176 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03150f8860bfe2b1d231473bb9db0604670a18da80382af1a0bad59963242429"} pod="openshift-machine-config-operator/machine-config-daemon-94w9l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 04:21:09 crc kubenswrapper[4766]: I1213 04:21:09.735253 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" containerID="cri-o://03150f8860bfe2b1d231473bb9db0604670a18da80382af1a0bad59963242429" gracePeriod=600 Dec 13 04:21:10 crc kubenswrapper[4766]: I1213 04:21:10.173293 4766 generic.go:334] "Generic (PLEG): container finished" podID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerID="03150f8860bfe2b1d231473bb9db0604670a18da80382af1a0bad59963242429" exitCode=0 Dec 13 04:21:10 crc kubenswrapper[4766]: I1213 04:21:10.173601 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerDied","Data":"03150f8860bfe2b1d231473bb9db0604670a18da80382af1a0bad59963242429"} Dec 13 04:21:10 crc kubenswrapper[4766]: I1213 04:21:10.173658 4766 scope.go:117] "RemoveContainer" containerID="73b359d6042adaed91cafcdce588efdce0b2235b226906aa3955206639853ae2" Dec 13 04:21:11 crc kubenswrapper[4766]: I1213 04:21:11.184565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" event={"ID":"71e6a48b-4f5d-4299-9c7b-98dbe11e670e","Type":"ContainerStarted","Data":"d4376e982a45edac27b9daaf4d018917ab407aedfa8250e17a93fe67739e0455"} Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.707185 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pqhtr"] Dec 13 04:21:38 crc kubenswrapper[4766]: E1213 04:21:38.708236 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" containerName="registry-server" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.708276 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" containerName="registry-server" Dec 13 04:21:38 crc kubenswrapper[4766]: E1213 04:21:38.708303 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" 
containerName="extract-content" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.708315 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" containerName="extract-content" Dec 13 04:21:38 crc kubenswrapper[4766]: E1213 04:21:38.708335 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8df5171-edb0-4f50-81a5-4f0c8958cd7b" containerName="gather" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.708347 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8df5171-edb0-4f50-81a5-4f0c8958cd7b" containerName="gather" Dec 13 04:21:38 crc kubenswrapper[4766]: E1213 04:21:38.708378 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8df5171-edb0-4f50-81a5-4f0c8958cd7b" containerName="copy" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.708389 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8df5171-edb0-4f50-81a5-4f0c8958cd7b" containerName="copy" Dec 13 04:21:38 crc kubenswrapper[4766]: E1213 04:21:38.708402 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" containerName="extract-utilities" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.708413 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" containerName="extract-utilities" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.708700 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8df5171-edb0-4f50-81a5-4f0c8958cd7b" containerName="gather" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.708738 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08c5012-2c3c-47cb-9fac-3a3681831c6c" containerName="registry-server" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.708767 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8df5171-edb0-4f50-81a5-4f0c8958cd7b" containerName="copy" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.710411 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.746770 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqhtr"] Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.767176 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-utilities\") pod \"redhat-marketplace-pqhtr\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.767519 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-catalog-content\") pod \"redhat-marketplace-pqhtr\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.767710 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swwq8\" (UniqueName: \"kubernetes.io/projected/0fec1c27-d573-4d8f-b8d3-d42a933f256f-kube-api-access-swwq8\") pod \"redhat-marketplace-pqhtr\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.873834 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swwq8\" (UniqueName: \"kubernetes.io/projected/0fec1c27-d573-4d8f-b8d3-d42a933f256f-kube-api-access-swwq8\") pod \"redhat-marketplace-pqhtr\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.873931 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-utilities\") pod \"redhat-marketplace-pqhtr\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.874020 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-catalog-content\") pod \"redhat-marketplace-pqhtr\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.874779 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-catalog-content\") pod \"redhat-marketplace-pqhtr\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.875581 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-utilities\") pod \"redhat-marketplace-pqhtr\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:38 crc kubenswrapper[4766]: I1213 04:21:38.910023 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-swwq8\" (UniqueName: \"kubernetes.io/projected/0fec1c27-d573-4d8f-b8d3-d42a933f256f-kube-api-access-swwq8\") pod \"redhat-marketplace-pqhtr\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:39 crc kubenswrapper[4766]: I1213 04:21:39.036465 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:39 crc kubenswrapper[4766]: I1213 04:21:39.494630 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqhtr"] Dec 13 04:21:40 crc kubenswrapper[4766]: I1213 04:21:40.412611 4766 generic.go:334] "Generic (PLEG): container finished" podID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerID="f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e" exitCode=0 Dec 13 04:21:40 crc kubenswrapper[4766]: I1213 04:21:40.412672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqhtr" event={"ID":"0fec1c27-d573-4d8f-b8d3-d42a933f256f","Type":"ContainerDied","Data":"f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e"} Dec 13 04:21:40 crc kubenswrapper[4766]: I1213 04:21:40.412725 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqhtr" event={"ID":"0fec1c27-d573-4d8f-b8d3-d42a933f256f","Type":"ContainerStarted","Data":"a0f11b08199d90d467adb57027fcf5d57045a7f166a7a57cc044f316077f6234"} Dec 13 04:21:41 crc kubenswrapper[4766]: I1213 04:21:41.422382 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqhtr" event={"ID":"0fec1c27-d573-4d8f-b8d3-d42a933f256f","Type":"ContainerStarted","Data":"903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3"} Dec 13 04:21:42 crc kubenswrapper[4766]: I1213 04:21:42.432082 4766 generic.go:334] "Generic (PLEG): container finished" podID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerID="903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3" exitCode=0 Dec 13 04:21:42 crc kubenswrapper[4766]: I1213 04:21:42.432138 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqhtr" event={"ID":"0fec1c27-d573-4d8f-b8d3-d42a933f256f","Type":"ContainerDied","Data":"903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3"} Dec 13 04:21:43 crc kubenswrapper[4766]: I1213 04:21:43.441571 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqhtr" event={"ID":"0fec1c27-d573-4d8f-b8d3-d42a933f256f","Type":"ContainerStarted","Data":"9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a"} Dec 13 04:21:43 crc kubenswrapper[4766]: I1213 04:21:43.462056 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pqhtr" podStartSLOduration=3.0368141 podStartE2EDuration="5.462039837s" podCreationTimestamp="2025-12-13 04:21:38 +0000 UTC" firstStartedPulling="2025-12-13 04:21:40.414399026 +0000 UTC m=+2231.924331990" lastFinishedPulling="2025-12-13 04:21:42.839624763 +0000 UTC m=+2234.349557727" observedRunningTime="2025-12-13 04:21:43.45683166 +0000 UTC m=+2234.966764634" watchObservedRunningTime="2025-12-13 04:21:43.462039837 +0000 UTC m=+2234.971972801" Dec 13 04:21:49 crc kubenswrapper[4766]: I1213 04:21:49.036669 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:49 crc kubenswrapper[4766]: I1213 04:21:49.037349 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:49 crc kubenswrapper[4766]: I1213 04:21:49.086404 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:49 crc kubenswrapper[4766]: I1213 04:21:49.554723 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:49 crc kubenswrapper[4766]: I1213 04:21:49.602232 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqhtr"] Dec 13 04:21:51 crc kubenswrapper[4766]: I1213 04:21:51.511822 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pqhtr" podUID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerName="registry-server" containerID="cri-o://9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a" gracePeriod=2 Dec 13 04:21:51 crc kubenswrapper[4766]: I1213 04:21:51.954704 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.129053 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-utilities\") pod \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.129115 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-catalog-content\") pod \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.129134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swwq8\" (UniqueName: \"kubernetes.io/projected/0fec1c27-d573-4d8f-b8d3-d42a933f256f-kube-api-access-swwq8\") pod \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\" (UID: \"0fec1c27-d573-4d8f-b8d3-d42a933f256f\") " Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.129950 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-utilities" (OuterVolumeSpecName: "utilities") pod "0fec1c27-d573-4d8f-b8d3-d42a933f256f" (UID: "0fec1c27-d573-4d8f-b8d3-d42a933f256f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.142662 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fec1c27-d573-4d8f-b8d3-d42a933f256f-kube-api-access-swwq8" (OuterVolumeSpecName: "kube-api-access-swwq8") pod "0fec1c27-d573-4d8f-b8d3-d42a933f256f" (UID: "0fec1c27-d573-4d8f-b8d3-d42a933f256f"). InnerVolumeSpecName "kube-api-access-swwq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.149771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fec1c27-d573-4d8f-b8d3-d42a933f256f" (UID: "0fec1c27-d573-4d8f-b8d3-d42a933f256f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.231125 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.231210 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fec1c27-d573-4d8f-b8d3-d42a933f256f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.231222 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swwq8\" (UniqueName: \"kubernetes.io/projected/0fec1c27-d573-4d8f-b8d3-d42a933f256f-kube-api-access-swwq8\") on node \"crc\" DevicePath \"\"" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.521674 4766 generic.go:334] "Generic (PLEG): container finished" podID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerID="9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a" exitCode=0 Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.521721 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqhtr" event={"ID":"0fec1c27-d573-4d8f-b8d3-d42a933f256f","Type":"ContainerDied","Data":"9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a"} Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.521766 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqhtr" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.521774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqhtr" event={"ID":"0fec1c27-d573-4d8f-b8d3-d42a933f256f","Type":"ContainerDied","Data":"a0f11b08199d90d467adb57027fcf5d57045a7f166a7a57cc044f316077f6234"} Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.521787 4766 scope.go:117] "RemoveContainer" containerID="9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.548196 4766 scope.go:117] "RemoveContainer" containerID="903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.564420 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqhtr"] Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.573941 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqhtr"] Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.586784 4766 scope.go:117] "RemoveContainer" containerID="f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.611799 4766 scope.go:117] "RemoveContainer" containerID="9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a" Dec 13 04:21:52 crc kubenswrapper[4766]: E1213 04:21:52.612364 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a\": container with ID starting with 9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a not found: ID does not exist" containerID="9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.612413 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a"} err="failed to get container status \"9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a\": rpc error: code = NotFound desc = could not find container \"9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a\": container with ID starting with 9d64a3cc585c1214cd0bcc0b238ada2e334d9d2ae6a8ddb25ab644132462302a not found: ID does not exist" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.612457 4766 scope.go:117] "RemoveContainer" containerID="903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3" Dec 13 04:21:52 crc kubenswrapper[4766]: E1213 04:21:52.612762 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3\": container with ID starting with 903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3 not found: ID does not exist" containerID="903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.612804 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3"} err="failed to get container status \"903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3\": rpc error: code = NotFound desc = could not find 
container \"903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3\": container with ID starting with 903749d2f3ff03c28fd5625aeaaca2f7a8092464862c769d429e9a1d53977aa3 not found: ID does not exist" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.612826 4766 scope.go:117] "RemoveContainer" containerID="f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e" Dec 13 04:21:52 crc kubenswrapper[4766]: E1213 04:21:52.613132 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e\": container with ID starting with f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e not found: ID does not exist" containerID="f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e" Dec 13 04:21:52 crc kubenswrapper[4766]: I1213 04:21:52.613160 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e"} err="failed to get container status \"f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e\": rpc error: code = NotFound desc = could not find container \"f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e\": container with ID starting with f59e7f3dd1fded9d7f8778b6cfa1521ee507edf770f8b03576ba7072f4910c1e not found: ID does not exist" Dec 13 04:21:53 crc kubenswrapper[4766]: I1213 04:21:53.624574 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" path="/var/lib/kubelet/pods/0fec1c27-d573-4d8f-b8d3-d42a933f256f/volumes" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.162391 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ntkfr"] Dec 13 04:22:43 crc kubenswrapper[4766]: E1213 04:22:43.163817 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerName="extract-content" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.163839 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerName="extract-content" Dec 13 04:22:43 crc kubenswrapper[4766]: E1213 04:22:43.163865 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerName="registry-server" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.163872 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerName="registry-server" Dec 13 04:22:43 crc kubenswrapper[4766]: E1213 04:22:43.163892 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerName="extract-utilities" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.163898 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerName="extract-utilities" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.164061 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fec1c27-d573-4d8f-b8d3-d42a933f256f" containerName="registry-server" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.165505 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.184913 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ntkfr"] Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.260885 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-catalog-content\") pod \"community-operators-ntkfr\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.261039 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9jlk\" (UniqueName: \"kubernetes.io/projected/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-kube-api-access-x9jlk\") pod \"community-operators-ntkfr\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.261075 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-utilities\") pod \"community-operators-ntkfr\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.362772 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9jlk\" (UniqueName: \"kubernetes.io/projected/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-kube-api-access-x9jlk\") pod \"community-operators-ntkfr\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.363372 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-utilities\") pod \"community-operators-ntkfr\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.364049 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-utilities\") pod \"community-operators-ntkfr\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.364219 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-catalog-content\") pod \"community-operators-ntkfr\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.364598 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-catalog-content\") pod \"community-operators-ntkfr\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.383891 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x9jlk\" (UniqueName: \"kubernetes.io/projected/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-kube-api-access-x9jlk\") pod \"community-operators-ntkfr\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.488153 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.897173 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ntkfr"] Dec 13 04:22:43 crc kubenswrapper[4766]: I1213 04:22:43.949143 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntkfr" event={"ID":"e576ce11-9da1-47ab-9ba2-f54af5fd83ff","Type":"ContainerStarted","Data":"06dc5de5c570250ad60bd0763bdce9534466da38381916af52fd481d92d45fde"} Dec 13 04:22:44 crc kubenswrapper[4766]: I1213 04:22:44.959599 4766 generic.go:334] "Generic (PLEG): container finished" podID="e576ce11-9da1-47ab-9ba2-f54af5fd83ff" containerID="a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b" exitCode=0 Dec 13 04:22:44 crc kubenswrapper[4766]: I1213 04:22:44.959721 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntkfr" event={"ID":"e576ce11-9da1-47ab-9ba2-f54af5fd83ff","Type":"ContainerDied","Data":"a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b"} Dec 13 04:22:44 crc kubenswrapper[4766]: I1213 04:22:44.962371 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 04:22:45 crc kubenswrapper[4766]: I1213 04:22:45.978224 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntkfr" event={"ID":"e576ce11-9da1-47ab-9ba2-f54af5fd83ff","Type":"ContainerStarted","Data":"03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a"} Dec 13 04:22:46 crc kubenswrapper[4766]: I1213 04:22:46.994839 4766 generic.go:334] "Generic (PLEG): container finished" podID="e576ce11-9da1-47ab-9ba2-f54af5fd83ff" containerID="03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a" exitCode=0 Dec 13 04:22:46 crc kubenswrapper[4766]: I1213 04:22:46.994912 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntkfr" event={"ID":"e576ce11-9da1-47ab-9ba2-f54af5fd83ff","Type":"ContainerDied","Data":"03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a"} Dec 13 04:22:48 crc kubenswrapper[4766]: I1213 04:22:48.085202 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntkfr" event={"ID":"e576ce11-9da1-47ab-9ba2-f54af5fd83ff","Type":"ContainerStarted","Data":"f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae"} Dec 13 04:22:48 crc kubenswrapper[4766]: I1213 04:22:48.106134 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ntkfr" podStartSLOduration=2.640653815 podStartE2EDuration="5.106118514s" podCreationTimestamp="2025-12-13 04:22:43 +0000 UTC" firstStartedPulling="2025-12-13 04:22:44.961684033 +0000 UTC m=+2296.471617037" lastFinishedPulling="2025-12-13 04:22:47.427148772 +0000 UTC m=+2298.937081736" observedRunningTime="2025-12-13 04:22:48.105895148 +0000 UTC m=+2299.615828132" watchObservedRunningTime="2025-12-13 
04:22:48.106118514 +0000 UTC m=+2299.616051478" Dec 13 04:22:53 crc kubenswrapper[4766]: I1213 04:22:53.489322 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:53 crc kubenswrapper[4766]: I1213 04:22:53.489971 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:53 crc kubenswrapper[4766]: I1213 04:22:53.534364 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:54 crc kubenswrapper[4766]: I1213 04:22:54.183119 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:54 crc kubenswrapper[4766]: I1213 04:22:54.242404 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ntkfr"] Dec 13 04:22:56 crc kubenswrapper[4766]: I1213 04:22:56.156672 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ntkfr" podUID="e576ce11-9da1-47ab-9ba2-f54af5fd83ff" containerName="registry-server" containerID="cri-o://f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae" gracePeriod=2 Dec 13 04:22:57 crc kubenswrapper[4766]: I1213 04:22:57.655312 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:57 crc kubenswrapper[4766]: I1213 04:22:57.814410 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-utilities\") pod \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " Dec 13 04:22:57 crc kubenswrapper[4766]: I1213 04:22:57.814916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-catalog-content\") pod \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " Dec 13 04:22:57 crc kubenswrapper[4766]: I1213 04:22:57.815010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9jlk\" (UniqueName: \"kubernetes.io/projected/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-kube-api-access-x9jlk\") pod \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\" (UID: \"e576ce11-9da1-47ab-9ba2-f54af5fd83ff\") " Dec 13 04:22:57 crc kubenswrapper[4766]: I1213 04:22:57.816302 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-utilities" (OuterVolumeSpecName: "utilities") pod "e576ce11-9da1-47ab-9ba2-f54af5fd83ff" (UID: "e576ce11-9da1-47ab-9ba2-f54af5fd83ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:22:57 crc kubenswrapper[4766]: I1213 04:22:57.827587 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-kube-api-access-x9jlk" (OuterVolumeSpecName: "kube-api-access-x9jlk") pod "e576ce11-9da1-47ab-9ba2-f54af5fd83ff" (UID: "e576ce11-9da1-47ab-9ba2-f54af5fd83ff"). InnerVolumeSpecName "kube-api-access-x9jlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:22:57 crc kubenswrapper[4766]: I1213 04:22:57.872103 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e576ce11-9da1-47ab-9ba2-f54af5fd83ff" (UID: "e576ce11-9da1-47ab-9ba2-f54af5fd83ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 04:22:57 crc kubenswrapper[4766]: I1213 04:22:57.916912 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 04:22:57 crc kubenswrapper[4766]: I1213 04:22:57.916958 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 04:22:57 crc kubenswrapper[4766]: I1213 04:22:57.916974 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9jlk\" (UniqueName: \"kubernetes.io/projected/e576ce11-9da1-47ab-9ba2-f54af5fd83ff-kube-api-access-x9jlk\") on node \"crc\" DevicePath \"\"" Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.175067 4766 generic.go:334] "Generic (PLEG): container finished" podID="e576ce11-9da1-47ab-9ba2-f54af5fd83ff" containerID="f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae" exitCode=0 Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.175125 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntkfr" event={"ID":"e576ce11-9da1-47ab-9ba2-f54af5fd83ff","Type":"ContainerDied","Data":"f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae"} Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.175158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntkfr" event={"ID":"e576ce11-9da1-47ab-9ba2-f54af5fd83ff","Type":"ContainerDied","Data":"06dc5de5c570250ad60bd0763bdce9534466da38381916af52fd481d92d45fde"} Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.175208 4766 scope.go:117] "RemoveContainer" containerID="f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae" Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.175361 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ntkfr" Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.214686 4766 scope.go:117] "RemoveContainer" containerID="03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a" Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.222888 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ntkfr"] Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.228884 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ntkfr"] Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.248222 4766 scope.go:117] "RemoveContainer" containerID="a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b" Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.285403 4766 scope.go:117] "RemoveContainer" containerID="f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae" Dec 13 04:22:58 crc kubenswrapper[4766]: E1213 04:22:58.286084 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae\": container with ID starting with f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae not found: ID does not exist" containerID="f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae" Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.286149 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae"} err="failed to get container status \"f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae\": rpc error: code = NotFound desc = could not find container \"f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae\": container with ID starting with f60a4281dabb727aa3988a27a61e79eeaa7bdd9a1b3a63dbef308247d4ae64ae not found: ID does not exist" Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.286184 4766 scope.go:117] "RemoveContainer" containerID="03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a" Dec 13 04:22:58 crc kubenswrapper[4766]: E1213 04:22:58.286532 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a\": container with ID starting with 03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a not found: ID does not exist" containerID="03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a" Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.286569 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a"} err="failed to get container status \"03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a\": rpc error: code = NotFound desc = could not find container \"03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a\": container with ID starting with 03940aea0c4250517af538b3222298dedac65f0aefc798f36caf94f801ca912a not found: ID does not exist" Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.286588 4766 scope.go:117] "RemoveContainer" containerID="a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b" Dec 13 04:22:58 crc kubenswrapper[4766]: E1213 04:22:58.286856 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b\": container with ID starting with a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b not found: ID does not exist" containerID="a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b" Dec 13 04:22:58 crc kubenswrapper[4766]: I1213 04:22:58.286902 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b"} err="failed to get container status \"a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b\": rpc error: code = NotFound desc = could not find container \"a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b\": container with ID starting with a0775c42d4fc6965ed25e85393a3798dc07df252f7f2cab00ebadc8a564c067b not found: ID does not exist" Dec 13 04:22:59 crc kubenswrapper[4766]: I1213 04:22:59.626155 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e576ce11-9da1-47ab-9ba2-f54af5fd83ff" path="/var/lib/kubelet/pods/e576ce11-9da1-47ab-9ba2-f54af5fd83ff/volumes" Dec 13 04:23:39 crc kubenswrapper[4766]: I1213 04:23:39.732790 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:23:39 crc kubenswrapper[4766]: I1213 04:23:39.733305 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 04:24:09 crc kubenswrapper[4766]: I1213 04:24:09.732630 4766 patch_prober.go:28] interesting pod/machine-config-daemon-94w9l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 04:24:09 crc kubenswrapper[4766]: I1213 04:24:09.733231 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94w9l" podUID="71e6a48b-4f5d-4299-9c7b-98dbe11e670e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"